PRODIS - Distributed Processing Techniques for DEA Software

Introduction
Parallel and distributed computing is widely adopted in large-scale engineering and scientific applications in research environments. Accordingly, Cepel, through its Department of Energy Optimization and Environment (DEA), adopted parallel processing as a way to reduce the execution times of its main programs.

The goal of the PRODIS project is to develop programming techniques to run these programs in a high-performance environment using parallel processing. Currently, the NEWAVE (Long and Medium Term Operation Planning Model of Interconnected Hydrothermal Systems), DECOMP (Short Term Operation Planning Model for Interconnected Hydrothermal Systems) and SUISHI (Simulation Model with Individualized Plants for Interconnected Hydrothermal Subsystems) models already make use of this type of environment. These models are widely used in the Brazilian Electrical System, with emphasis on NEWAVE in medium-term energy operation planning and expansion planning studies. They also demand large amounts of CPU time and contain tasks that can be performed simultaneously. Once modified, each program is executed by several processes instead of a single one. These processes communicate with each other through messages, sending and receiving data, and one of these processes is responsible for organizing the message exchange.
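For illustration only, a coordinator/worker exchange of this kind could be written with the mpi4py bindings as sketched below; the task contents are hypothetical, and the actual NEWAVE, DECOMP and SUISHI codes call the MPI library directly.

# Minimal sketch of a coordinator/worker layout with mpi4py (hypothetical
# payloads; the real programs call MPI routines directly).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    # Coordinator: hand one chunk of work to each worker, then collect results.
    tasks = [{"scenario": i} for i in range(1, size)]
    for dest, task in enumerate(tasks, start=1):
        comm.send(task, dest=dest, tag=1)
    results = [comm.recv(source=src, tag=2) for src in range(1, size)]
    print("collected", len(results), "results")
else:
    # Worker: receive a task, "solve" it, and return the answer.
    task = comm.recv(source=0, tag=1)
    comm.send({"scenario": task["scenario"], "cost": 0.0}, dest=0, tag=2)

Such a script would be launched with, for example, mpiexec -n 8 python script.py, one process per MPI rank.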

The identification of the tasks to be parallelized can be exemplified by what was done in the NEWAVE program. This program solves a long- and medium-term planning problem (5 to 15 years of planning horizon) and uses stochastic dual dynamic programming (SDDP) as the solution technique. The solution process consists of determining an operating policy so that, at each step of the planning period and given an initial state, generation targets for each subsystem are obtained in order to minimize the expected value of the total operation cost over the planning horizon, also taking into account a given risk measure.
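In schematic terms (the notation below is only illustrative and omits details of the actual NEWAVE formulation, such as the risk measure), the problem solved at each stage t can be written as

\[
\alpha_t(x_{t-1}) \;=\; \min_{x_t}\; c_t^{\top} x_t + \mathbb{E}\!\left[\alpha_{t+1}(x_t)\right]
\quad \text{subject to} \quad A_t x_t \ge b_t - B_t x_{t-1},
\]

where \(x_t\) gathers the stage decisions (storage levels, generation, exchanges), \(c_t\) the immediate operation costs, and \(\alpha_{t+1}\) the future cost function at the end of stage \(t\).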

This policy, represented by the future cost function (FCF) at the end of each time step (stage), is approximated by a piecewise linear function built iteratively from Benders cuts. At each stage, and for each state of the system (storage levels and inflows in the previous months), the hydrothermal coordination problem is modeled as a linear program (LP), and the dual variables associated with the solution of this problem are used to construct the Benders cuts (backward step).
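Schematically, after J cuts have been generated, the FCF at the end of stage t is represented in the LP by a free variable \(\alpha\) constrained by (again, the notation is illustrative)

\[
\alpha \;\ge\; \alpha^{j} + (\pi^{j})^{\top}\,(x_t - x_t^{j}), \qquad j = 1,\dots,J,
\]

where \(x_t^{j}\) is the state visited when cut \(j\) was built, \(\pi^{j}\) the corresponding dual variables, and \(\alpha^{j}\) the cost observed at that state; the FCF is the pointwise maximum of these linear pieces.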

In order to obtain an estimate of the expected operating cost over the planning horizon, the operation of the system is simulated for the different inflow scenarios, considering the FCF obtained at each time step (forward step). Since this method is composed of many independent subproblems, the software takes advantage of this characteristic to distribute the various LP problems involved in the backward and forward steps among multiple processors.

In the backward step, at each stage of the planning horizon, each process receives a set of operation dispatch problems associated with different energy inflow scenarios. After solving all of these problems, each process produces a Benders cut that is sent to a main process. After receiving all the Benders cuts, this main process assembles the FCF to be used in stage (t-1) and transmits it to all other processes involved in the calculation (Figure 1).

Figure 1 - Distribution of Benders cuts in the backward process
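A minimal sketch of this backward-step communication pattern, written with mpi4py for illustration (the solver and cut-building routines below are placeholders, not Cepel code), could look like this:

# Sketch of the backward-step communication pattern with mpi4py.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def solve_scenarios(stage, my_scenarios):
    # Placeholder: solve one LP per inflow scenario, return dual information.
    return [{"scenario": s, "duals": []} for s in my_scenarios]

def build_cut(solutions):
    # Placeholder: combine the duals into a single Benders cut.
    return {"intercept": 0.0, "slopes": []}

my_scenarios = range(rank, 20, comm.Get_size())   # static split, for illustration only
for stage in reversed(range(1, 12)):              # walk backward over the horizon
    cut = build_cut(solve_scenarios(stage, my_scenarios))
    all_cuts = comm.gather(cut, root=0)           # every process sends its cut to rank 0
    if rank == 0:
        fcf = {"stage": stage - 1, "cuts": all_cuts}   # assemble the FCF for stage t-1
    else:
        fcf = None
    fcf = comm.bcast(fcf, root=0)                 # broadcast the updated FCF to all processes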

Likewise, in the forward step, at each stage the processes receive a set of LP problems whose solutions are transferred to a main process to check convergence (Figure 2).


Figure 2 - Distribution of the Benders cuts in the forward process

If convergence has not been achieved, a new iteration is executed, again in a distributed manner, until final convergence is reached. Communication between processes is established using MPI (Message Passing Interface) routines.
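For illustration, the forward-step convergence check can be sketched with mpi4py as follows; the simulation routine and the convergence criterion are placeholders, not the actual NEWAVE logic.

# Sketch of the forward-step convergence check with mpi4py (illustrative only).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def simulate_my_scenarios():
    # Placeholder: simulate the operation for this process's inflow scenarios
    # using the current FCF and return their total costs.
    return [100.0, 98.5]

converged = False
while not converged:
    costs = comm.gather(simulate_my_scenarios(), root=0)   # solutions go to the main process
    if rank == 0:
        # Placeholder criterion: compare the simulated cost estimate with the
        # lower bound given by the first-stage problem.
        converged = True
    converged = comm.bcast(converged, root=0)
    # If not converged, a new backward/forward iteration would follow here.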

The current version of the software uses a parallelization strategy with the following features: (1) it allows dynamic distribution of the LP problems to be solved, optimizing the load balance between processes; (2) it minimizes communication between processing cores; (3) it introduces asynchronous message communication when grouping Benders cuts; and (4) it adjusts messages to suit the new hardware structure based on multi-core processors.
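As an illustration of feature (1), a dynamic, on-demand distribution of LP tasks can be sketched with mpi4py as below; the task payloads and the "solver" reply are hypothetical.

# Sketch of on-demand (dynamic) distribution of LP tasks with mpi4py.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
TASK, STOP = 1, 2

if rank == 0:
    tasks = list(range(100))                      # indices of the LPs to be solved
    status = MPI.Status()
    active = size - 1
    for dest in range(1, size):                   # prime each worker with one task
        comm.send(tasks.pop(), dest=dest, tag=TASK)
    while active:
        result = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
        src = status.Get_source()
        if tasks:
            comm.send(tasks.pop(), dest=src, tag=TASK)   # keep the fastest workers busy
        else:
            comm.send(None, dest=src, tag=STOP)          # nothing left to do
            active -= 1
else:
    status = MPI.Status()
    while True:
        task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == STOP:
            break
        comm.send({"lp": task, "cost": 0.0}, dest=0, tag=TASK)  # "solve" and reply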
 
Similar to what was done with the NEWAVE program, we studied the source codes of the DECOMP and SUISHI programs, identifying the tasks that could be executed independently. These tasks were distributed among several processes using the MPI function library, allowing a large reduction in their computational times.

The relevance of this project can be measured by the significant reduction in the execution times of the NEWAVE, DECOMP and SUISHI programs. The reduction in these computational times made it possible to implement numerous improvements in their mathematical modeling, in addition to speeding up the completion of several studies.
More recently, we have been working on asynchronous versions of the SDDP and DDP algorithms in the NEWAVE and DECOMP models, respectively, in order to achieve much greater scalability of our parallel processing scheme as the number of processes increases.
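The sketch below only illustrates the kind of non-blocking MPI primitives (here via mpi4py) on which such asynchronous schemes rely; it is not the actual NEWAVE or DECOMP implementation, and the exchanged payload is hypothetical.

# Non-blocking exchange of a cut between two ranks while local work continues.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if comm.Get_size() >= 2 and rank in (0, 1):
    other = 1 - rank
    req = comm.isend({"cut": rank}, dest=other, tag=7)   # post the send and keep working
    incoming = comm.irecv(source=other, tag=7)           # post the receive as well
    # ... local LPs could be solved here while the messages are in flight ...
    req.wait()
    cut = incoming.wait()                                 # collect the neighbour's cut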

To run these programs with distributed processing, Cepel relies on the computational infrastructure of its Intensive Computing Laboratory (Labcin).
 
Contact

Contact the responsible area via email:


 prodis@cepel.br