{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n\n# Forecasting\n\nIn this example, we present a full forecasting workflow for the Sioux Falls\nexample model.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Imports\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from uuid import uuid4\nfrom tempfile import gettempdir\nfrom os.path import join\nfrom aequilibrae.utils.create_example import create_example\nimport logging\nimport sys" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We create the example project inside our temp folder\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fldr = join(gettempdir(), uuid4().hex)\n\nproject = create_example(fldr)\nlogger = project.logger\n\n# We get the project open, and we can tell the logger to direct all messages to the terminal as well\nstdout_handler = logging.StreamHandler(sys.stdout)\nformatter = logging.Formatter(\"%(asctime)s;%(levelname)s ; %(message)s\")\nstdout_handler.setFormatter(formatter)\nlogger.addHandler(stdout_handler)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Traffic assignment with skimming\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from aequilibrae.paths import TrafficAssignment, TrafficClass" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We build all graphs\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "project.network.build_graphs()\n# We get warnings that several fields in the project are filled with NaNs. 
\n# This is true, but we won't use those fields" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We grab the graph for cars\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "graph = project.network.graphs[\"c\"]\n\n# Let's say we want to minimize the free_flow_time\ngraph.set_graph(\"free_flow_time\")\n\n# And we will skim time and distance while we are at it\ngraph.set_skimming([\"free_flow_time\", \"distance\"])\n\n# And we will allow paths to be computed going through other centroids/centroid connectors,\n# which is required for the Sioux Falls network, as all its nodes are centroids\ngraph.set_blocked_centroid_flows(False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We get the demand matrix directly from the project record,\nso let's inspect what we have in the project\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "proj_matrices = project.matrices\nprint(proj_matrices.list())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's retrieve the demand matrix by name\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "demand = proj_matrices.get_matrix(\"demand_omx\")\ndemand.computational_view([\"matrix\"])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "assig = TrafficAssignment()\n\n# Create the assignment class\nassigclass = TrafficClass(name=\"car\", graph=graph, matrix=demand)\n\n# The first thing to do is to add a list of traffic classes to be assigned\nassig.add_class(assigclass)\n\n# We set these parameters only after adding one class to the assignment\n\n# We set the volume-delay function (VDF)\nassig.set_vdf(\"BPR\") # This is not case-sensitive\n\nassig.set_vdf_parameters({\"alpha\": \"b\", \"beta\": \"power\"}) # And its\u0020
parameters\n\nassig.set_capacity_field(\"capacity\") # The capacity and free-flow travel times as they exist in the graph\nassig.set_time_field(\"free_flow_time\")\n\n# And the algorithm we want to use to assign\nassig.set_algorithm(\"bfw\")\n\n# Since we haven't checked the parameters file, let's make sure the convergence criteria are adequate\nassig.max_iter = 1000\nassig.rgap_target = 0.001\n\nassig.execute() # We then execute the assignment" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The convergence report is easy to see\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import pandas as pd\n\nconvergence_report = assig.report()\nprint(convergence_report.head())" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "volumes = assig.results()\nprint(volumes.head())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We could export the results to CSV or AequilibraE data, but let's put them directly into the results database\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "assig.save_results(\"base_year_assignment\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And save the skims\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "assig.save_skims(\"base_year_assignment_skims\", which_ones=\"all\", format=\"omx\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Trip distribution\n\n### Calibration\n\nWe will calibrate synthetic gravity models using the time skims we just generated\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\nfrom aequilibrae.distribution import GravityCalibration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's\u0020
take another look at what we have in terms of matrices in the model\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(proj_matrices.list())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We need the demand matrix\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "demand = proj_matrices.get_matrix(\"demand_aem\")\n\n# And the skims\nimped = proj_matrices.get_matrix(\"base_year_assignment_skims_car\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can check which matrix cores were created for our skims to decide which one to use\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "imped.names\n\n# Where ``free_flow_time_final`` is actually the congested time for the last iteration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But before using the data, let's derive an impedance for the intrazonals.\nLet's assume it is 75% of the impedance to the closest zone\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "imped_core = \"free_flow_time_final\"\nimped.computational_view([imped_core])\n\n# If we run the code below more than once, we will be overwriting the diagonal values with nonsensical data,\n# so let's zero it first\nnp.fill_diagonal(imped.matrix_view, 0)\n\n# We compute it with a little bit of NumPy magic\nintrazonals = np.amin(imped.matrix_view, where=imped.matrix_view > 0, initial=imped.matrix_view.max(), axis=1)\nintrazonals *= 0.75\n\n# Then we fill in the impedance matrix\nnp.fill_diagonal(imped.matrix_view, intrazonals)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since we are working with an OMX file, we cannot overwrite a matrix on disk,\nso we give it a new name when saving\n\n" ] }, { "cell_type": "code", "execution_count": null,
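"metadata": { "collapsed": false }, "outputs": [], "source": [ "# A quick aside before saving: the np.amin pattern used above can be seen\n# on a toy matrix. This is purely illustrative and not part of the workflow.\nimport numpy as np\n\ntoy = np.array([[0.0, 2.0, 5.0], [3.0, 0.0, 1.0], [4.0, 6.0, 0.0]])\n\n# Smallest strictly positive entry in each row, skipping the zero diagonal\nnearest = np.amin(toy, where=toy > 0, initial=toy.max(), axis=1)\n\n# 75% of the nearest zone, mirroring the intrazonal assumption above\nprint(0.75 * nearest)" ] }, { "cell_type": "code", "execution_count": null,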
"metadata": { "collapsed": false }, "outputs": [], "source": [ "imped.save(names=[\"final_time_with_intrazonals\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This also sets the new matrix as the one used for computation,\nas one can verify below\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "imped.view_names" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We set the matrices to be used in computation\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "demand.computational_view([\"matrix\"])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "for function in [\"power\", \"expo\"]:\n    gc = GravityCalibration(matrix=demand, impedance=imped, function=function, nan_as_zero=True)\n    gc.calibrate()\n    model = gc.model\n\n    # We save the model\n    model.save(join(fldr, f\"{function}_model.mod\"))\n\n    # And we save the calibration report as well\n    with open(join(fldr, f\"{function}_convergence.log\"), \"w\") as otp:\n        for r in gc.report:\n            otp.write(r + \"\\n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Forecast\nWe create a set of 'future' vectors using some random growth factors.\nWe apply the inverse-power model, as its trip length frequency distribution\n(TLFD) seems to be a better fit for the observed one.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from aequilibrae.distribution import Ipf, GravityApplication, SyntheticGravityModel\nfrom aequilibrae.matrix import AequilibraeData" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We compute the vectors from our matrix\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false
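}, "outputs": [], "source": [ "# A quick note on the NumPy axis convention used below, on a toy array\n# (illustrative only): axis=1 adds across columns, giving row totals\n# (origins), while axis=0 adds across rows, giving column totals (destinations)\nimport numpy as np\n\ntoy = np.array([[1.0, 2.0], [3.0, 4.0]])\nprint(toy.sum(axis=1))  # row totals: [3. 7.]\nprint(toy.sum(axis=0))  # column totals: [4. 6.]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false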
}, "outputs": [], "source": [ "origins = np.sum(demand.matrix_view, axis=1)\ndestinations = np.sum(demand.matrix_view, axis=0)\n\nargs = {\n    \"file_path\": join(fldr, \"synthetic_future_vector.aed\"),\n    \"entries\": demand.zones,\n    \"field_names\": [\"origins\", \"destinations\"],\n    \"data_types\": [np.float64, np.float64],\n    \"memory_mode\": False,\n}\n\nvectors = AequilibraeData()\nvectors.create_empty(**args)\n\nvectors.index[:] = demand.index[:]\n\n# Then we grow them with some random growth between 0 and 10%, and balance them\nvectors.origins[:] = origins * (1 + np.random.rand(vectors.entries) / 10)\nvectors.destinations[:] = destinations * (1 + np.random.rand(vectors.entries) / 10)\nvectors.destinations *= vectors.origins.sum() / vectors.destinations.sum()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Impedance\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "imped = proj_matrices.get_matrix(\"base_year_assignment_skims_car\")\nimped.computational_view([\"final_time_with_intrazonals\"])\n\n# If we wanted the main diagonal not to be considered:\n# ``np.fill_diagonal(imped.matrix_view, np.nan)``" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "for function in [\"power\", \"expo\"]:\n    model = SyntheticGravityModel()\n    model.load(join(fldr, f\"{function}_model.mod\"))\n\n    outmatrix = join(proj_matrices.fldr, f\"demand_{function}_model.aem\")\n    args = {\n        \"impedance\": imped,\n        \"rows\": vectors,\n        \"row_field\": \"origins\",\n        \"model\": model,\n        \"columns\": vectors,\n        \"column_field\": \"destinations\",\n        \"nan_as_zero\": True,\n    }\n\n    gravity = GravityApplication(**args)\n    gravity.apply()\n\n    # We get the output matrix and save it to OMX too\n    gravity.save_to_project(name=f\"demand_{function}_modeled\", file_name=f\"demand_{function}_modeled.omx\")" ] }, { "cell_type": "markdown", "metadata": {},
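"source": [ "As a brief aside, the two deterrence functions calibrated earlier have simple\nfunctional forms, which we can sketch directly in NumPy. The beta value below is\nmade up for illustration and is not a calibrated parameter.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n\nbeta = 0.1  # illustrative value only, not a calibrated parameter\ncost = np.array([[1.0, 5.0], [5.0, 1.0]])\n\nf_power = cost ** -beta  # inverse power deterrence\nf_expo = np.exp(-beta * cost)  # negative exponential deterrence\n\nprint(f_power)\nprint(f_expo)" ] }, { "cell_type": "markdown", "metadata": {},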
"source": [ "We update the matrices table/records and verify that the new matrices are indeed there\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "proj_matrices.update_database()\nprint(proj_matrices.list())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### IPF for the future vectors\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "args = {\n    \"matrix\": demand,\n    \"rows\": vectors,\n    \"columns\": vectors,\n    \"column_field\": \"destinations\",\n    \"row_field\": \"origins\",\n    \"nan_as_zero\": True,\n}\n\nipf = Ipf(**args)\nipf.fit()\n\nipf.save_to_project(name=\"demand_ipfd\", file_name=\"demand_ipfd.aem\")\nipf.save_to_project(name=\"demand_ipfd_omx\", file_name=\"demand_ipfd.omx\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "df = proj_matrices.list()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Future traffic assignment\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from aequilibrae.paths import TrafficAssignment, TrafficClass" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "logger.info(\"\\n\\n\\n TRAFFIC ASSIGNMENT FOR FUTURE YEAR\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "demand = proj_matrices.get_matrix(\"demand_ipfd\")\n\n# Let's see which matrix core we ended up with.\u0020
It should be 'gravity'\ndemand.names" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's use the IPF matrix\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "demand.computational_view(\"matrix\")\n\nassig = TrafficAssignment()\n\n# Create the assignment class\nassigclass = TrafficClass(name=\"car\", graph=graph, matrix=demand)\n\n# The first thing to do is to add a list of traffic classes to be assigned\nassig.add_class(assigclass)\n\n# We set the volume-delay function (VDF)\nassig.set_vdf(\"BPR\") # This is not case-sensitive\n\nassig.set_vdf_parameters({\"alpha\": \"b\", \"beta\": \"power\"}) # And its parameters\n\nassig.set_capacity_field(\"capacity\") # The capacity and free-flow travel times as they exist in the graph\nassig.set_time_field(\"free_flow_time\")\n\n# And the algorithm we want to use to assign\nassig.set_algorithm(\"bfw\")\n\n# Since we haven't checked the parameters file, let's make sure the convergence criteria are adequate\nassig.max_iter = 500\nassig.rgap_target = 0.00001" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**OPTIONAL**\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# If we want to execute select link analysis on a particular TrafficClass, we set the links we are analyzing.\n# The format of the input select links is a dictionary (str: list[tuple]).\n# Each entry represents a separate set of selected links to compute. The str name will name the set of links.\n# The list[tuple] is the list of links being selected, of the form (link_id, direction), as it occurs in the Graph.\n# Direction can be 0, 1, -1.\u0020
0 denotes bi-directionality\n# For example, let's use Select Link on two sets of links:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "select_links = {\n    \"Leaving node 1\": [(1, 1), (2, 1)],\n    \"Random nodes\": [(3, 1), (5, 1)],\n}\n# We call this command on the class we are analyzing with our dictionary of values\nassigclass.set_select_links(select_links)\n\nassig.execute() # We then execute the assignment" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let us save our select link results; all we need to do is provide a name.\nIn addition to exporting the select link flows, this also exports the select link matrices in OMX format.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "assig.save_select_link_results(\"select_link_analysis\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we just want to save our select link flows, we can call:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "assig.save_select_link_flows(\"just_flows\")\n\n# Or if we just want the select link matrices:\nassig.save_select_link_matrices(\"just_matrices\")\n# Internally, save_select_link_results calls both of these methods at once.\n\n# We could export the results to CSV or AequilibraE data, but let's put them directly into the results database\nassig.save_results(\"future_year_assignment\")\n\n# And save the skims\nassig.save_skims(\"future_year_assignment_skims\", which_ones=\"all\", format=\"omx\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### We can also plot convergence\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n\ndf = assig.report()\nx = df.iteration.values\ny = df.rgap.values\n\nfig = plt.figure()\nax =\u0020
fig.add_subplot(111)\n\nplt.plot(x, y, \"k--\")\nplt.yscale(\"log\")\nplt.grid(True, which=\"both\")\nplt.xlabel(r\"Iterations\")\nplt.ylabel(r\"Relative Gap\")\nplt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Close the project\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "project.close()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.18" } }, "nbformat": 4, "nbformat_minor": 0 }