Draft

Evaluating the required scenario set size for stochastic programming in forest management planning: incorporating inventory and growth model uncertainty

Developing a plan of action for the future use of forest resources requires a way to predict the development of the forest through time. These predictions require inventory data and growth models that contain a large number of uncertainties. These uncertainties affect the quality of the predictions and, if not accounted for, can lead to the selection of a suboptimal management plan. To account for and manage the uncertainties and the associated risk, we have explored the use of stochastic programming. Stochastic programming can integrate uncertainty into the optimization process by solving the problem over a large number of potential scenarios of the forest's future development. The selection of an appropriately sized set of scenarios involves a trade-off between tractability and problem representation. In this paper, an analysis of the trade-offs is conducted. Two cases are studied, one in which only the uncertainty of the inventory data is included and a second in which both grow...


Introduction
In forest management, the development of high-quality plans requires detailed knowledge of the current situation of the forest holding, the ability to forecast the development of the holding, and the preferences of the stakeholders who may be included in the decision-making process. The management plan is a set of future actions to be taken in the forest over a set period of time. In forest management planning, these decisions could be the timing of a harvest or thinning; the planting, seeding, or natural regeneration of a stand; or silvicultural treatments to promote the establishment and growth of a stand.

These management actions are applied to a potential future forest holding, predicted using growth models (for examples in the Finnish context, see Vuokila and Väliaho, 1980; Mielikäinen, 1985; Huuskonen and Miina, 2007 for stand-level models, and Hynynen et al. 2002 for tree-level models). Even with accurate models, forecasting the future is never error-free (Diebold, 2001). The growth models predict the future state of the forest using the most recent inventory of the forest holding. As the current state of the forest is uncertain, this has an additional impact on the predictions of future forest resources. Even with current state-of-the-art forest inventory techniques, there is still a minor but significant level of uncertainty (Naesset 2004; Packalén and Maltamo 2007). Thus, the forecast of forest resources will be imperfect even if perfect growth models were available. Moreover, the growth process includes natural variability (e.g. weather) which cannot be reduced with improved models (Ferson and Ginzburg 1996).
Evaluating the uncertainty of forest inventory methods and growth models is an ongoing research area.
Both inventory methods and growth models should be validated by obtaining estimates of the precision and accuracy of the model or method (Gregoire and Valentine 2008; Pretzsch 2009). Sample-based inventory methods use analytic formulas to assess accuracy, while stand-level inventories are typically assessed against field data of very high quality. The accuracy of growth models is evaluated using time-series data. Haara and Leskinen (2009) evaluated the effects of both uncertainties and used the information to update stand-level inventory data.
When creating optimized forest management plans, the standard way of handling the uncertainty contained in the models and data has been to conduct the optimization under the assumption that there is no uncertainty. This assumption allows the use of relatively simple deterministic optimization processes, such as linear programming or heuristic optimization approaches (Kangas et al. 2008). However, some studies have addressed the management of uncertainty. For instance, some researchers have focused on ensuring the feasibility of meeting the demands of the constraints under uncertainty. Hof and Pickens (1991) used chance-constrained and chance-maximizing approaches to minimize the probability of not meeting a specified demand target. Palma and Nelson (2009) used robust optimization to ensure the feasibility of timber yield and demand constraints subject to a specified protection level, while maximizing net present value. Stochastic goal programming (Eyvindson and Kangas, 2014) has been used to develop forest management plans that minimize the expected deviations from the targets set for the criteria of interest to the decision maker. Kangas et al. (2014a) utilized a two-stage stochastic programming method to define the optimal measurement strategy to maximize the net present value of the forest holding. Additionally, Kangas et al. (2015) utilized a multi-stage stochastic programming framework to balance the cost of obtaining higher quality (or perfect) information against the improvement in the decisions made.
Stochastic programming has additional properties that make it more attractive than its deterministic counterpart. As stochastic programming deals with the uncertainty of the problem, it allows for the management of different kinds of risk (Birge and Louveaux 2011). Different approaches to risk management can be utilized, such as minimization of the downside mean semideviation (Krzemienowski and Ogryczak 2005) or of the Conditional Value at Risk (Rockafellar and Uryasev 2000). The selection of the specific risk measure should depend on both the general problem formulation preferences and the risk preferences of the decision maker.
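One such measure can be illustrated with a short sketch. The snippet below is a minimal empirical Conditional Value at Risk estimator in Python; the function name, the sample, and all parameter values are hypothetical illustrations, not taken from the paper. It simply averages the losses in the worst (1 − α) tail of a simulated loss sample.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical Conditional Value at Risk: the mean loss in the worst
    (1 - alpha) tail of a sample of scenario losses."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)   # Value at Risk (the alpha-quantile)
    tail = losses[losses >= var]       # worst-case tail of the sample
    return tail.mean()

# Toy example: 10,000 simulated losses from a standard normal distribution
rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=10_000)
print(cvar(sample, alpha=0.95))   # mean of the worst 5% of outcomes
```

Because CVaR averages over the whole tail rather than reading off a single quantile, it responds to how bad the extreme scenarios are, which is what makes it attractive as an optimization objective for risk-averse decision makers.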
Earlier studies have evaluated the value of information (VOI), or the losses from making improper decisions, for problems that maximize net present value (NPV). Holmström et al. (2003) and Eid et al. (2004) used cost-plus-loss analysis to evaluate different sampling strategies and inventory methods. Alternatively, Bayesian decision analysis (Lawrence 1999) can also be used to evaluate the VOI. In a forestry context, Kangas (2010) used Bayesian decision analysis to evaluate the value of improving the accuracy of information. These methods evaluate the VOI at the stand level, and cannot include holding-level constraints, such as maintaining a specified amount of old-growth forest or even-flow requirements. When these kinds of constraints are included, the VOI can be calculated through stochastic programming (chapter 4 of Birge and Louveaux, 2011).
The use of stochastic programming in forest science has primarily focused on evaluating the impacts of uncertain events such as forest fires (Gassmann 1989; Boychuk and Martell 1996). The research has focused on how to prepare for fire events and manage the risk (Hof and Pickens 1991; Bevers 2007). In a forest management context, Eriksson (2006) highlighted how stochastic programming could be used to plan for the uncertainty regarding climate change.
Depending on how the stochastic program is structured, adaptive management alternatives can be integrated into the optimization process (Lohmander 2007). Adaptive management is a structured process of learning about the problem and adapting the management decisions based on that learning (Williams 2011). In a stochastic process, learning can be interpreted as the resolution of some uncertainty. Two-stage or multi-stage stochastic programs with general recourse (rather than simple recourse) can integrate possible outcomes of learning and suggest appropriate management decisions based on the different possible outcomes of what has been learned. In the case of inventory and growth model uncertainty, learning and adaptive management would require including the possibility of a new inventory in the problem formulation (Kangas et al. 2014a, 2015).
Even when a new inventory is possible, the choice of general recourse over simple recourse should depend on the potential improvement to the decision process and the tractability of the problem. Allowing for the resolution of uncertainty through additional stages adds complexity to the model. When adding complexity to any model, the issue of parsimony must be addressed. In a decision framework, the question must be asked: does this improve the quality of the decision enough to justify the added complexity? Sometimes there is little benefit in creating a general recourse problem, because the simple recourse problem can simply be re-run once the uncertainty is resolved. Thus, even though options for adaptive management may not be explicitly detailed in the optimization process, the model can still be re-run with updated information once uncertainty is resolved. If there is no possibility of resolving uncertainty, simple recourse is the only option. Stochastic programming incorporates the known or assumed uncertainty directly into the problem formulation (Birge and Louveaux 2011). When the uncertainty is described by a continuous distribution, it can be approximated deterministically through several methods, such as Monte Carlo methods (Shapiro 2003) or quasi-Monte Carlo methods (Lemieux 2009). Through the creation of a scenario tree, the stochastic program becomes a deterministic approximation and can be solved using standard linear programming methods. When generating the scenario tree, there are trade-offs that must be considered. If the number of scenarios required to approximate the problem is too large, the problem may be intractable (King and Wallace 2012). On the other hand, if too few scenarios are used, the problem may not be represented appropriately. One approach to managing this trade-off is to optimize the construction of the scenario tree so that it satisfies particular statistical properties (Gülpinar et al. 2004).
The ability to quantify how well a set of scenarios approximates the true stochastic problem has received significant attention in the stochastic programming community. In general, determining how well the approximation represents the 'true' problem is based on finding statistical bounds for the optimal solution; other tools for evaluating scenario generation methods are detailed by Kaut and Wallace (2007). For Monte Carlo scenario generation, one method is the Sample Average Approximation (SAA) method (Kleywegt et al. 2002), which uses an algorithm to determine the number of scenarios required to produce a solution with an acceptable optimality gap and variability. A variety of other methods can also be used to establish optimality bounds for the stochastic problem. Each method has different advantages and disadvantages, and some methods are better suited to specific problem types and sampling methods. For instance, Higle and Sen (1996) use stochastic decomposition for stochastic linear programs, which incorporates statistically motivated approximations. Mak et al. (1999) use Monte Carlo sampling and focus on estimating the optimality gap for discrete approximations of stochastic programs.
The objective of this paper is to show how the problem formulation and the uncertainty involved affect the number of scenarios required to ensure a specific optimality gap. The required scenario set size for a corresponding optimality gap is analyzed with respect to changing risk preferences (from risk neutral to risk averse) and to the level and nature of the uncertainty (only inventory errors versus both inventory and growth model errors). For stochastic programming problems, the number of scenarios required to approximate the true problem need not be excessive. In this paper, the SAA method is used to estimate the optimality gap between the estimated 'true' stochastic programming problem and its approximation. The method relies on statistical techniques that should be familiar to forest inventory specialists and forest biometricians. This paper details the SAA method, provides a case study, and discusses the importance of including uncertainty in forest management planning.

Methods
The Sample Average Approximation Method

The quality of the solutions created using different sized sets of scenarios is evaluated using the SAA method (Kleywegt et al. 2002). The SAA methodology approximates a stochastic program in a way that allows us to estimate the optimality of the solutions it yields. The algorithm stops according to a specified stopping criterion, either when the optimality gap is small enough or when a predefined number of iterations has been exceeded.
To calculate an estimate of the optimized objective function, we must first state the general formulation. The general stochastic programming problem is:

[1] $\min_{x \in S} \{ f(x) := \mathbb{E}[F(x, W)] \}$

where $W$ is a random vector with probability distribution $P$, $S$ is the set of feasible decisions, $F(x, W)$ is a function of the decision vector $x$ and the random vector $W$, and $\mathbb{E}[F(x, W)]$ is the expected value function.
The optimal value of the solution to the problem will be denoted $v^*$:

[2] $v^* := \min_{x \in S} f(x)$

An approximation of the general stochastic programming formulation is the sample average approximation:

[3] $\hat{f}_N(x) := \frac{1}{N} \sum_{j=1}^{N} F(x, W^j)$

where each $W^j$, $j = 1, \ldots, N$, is a realization of the random vector $W$, and $N$ is the number of realizations.
$F$ denotes the objective evaluated for a single scenario, while $f$ denotes the stochastic (expected value) objective.
The corresponding approximate optimal value of the solution to the problem will be denoted $\hat{v}_N$:

[4] $\hat{v}_N := \min_{x \in S} \hat{f}_N(x)$

An estimator of $v^*$ is the average of a large number of independent replications of $\hat{v}_N$:

[5] $\bar{v}_N^M := \frac{1}{M} \sum_{m=1}^{M} \hat{v}_N^m$

where $M$ is the total number of repetitions of the algorithm. This estimator provides an expected lower bound on the true optimal value, so $\bar{v}_N^M$ is a conservative estimate (Mak et al. 1999). Its bias decreases monotonically as $N$ increases.
For each replication $m = 1, \ldots, M$, there exists an optimal solution $\hat{x}_N^m$ to the corresponding SAA problem. To estimate the performance bound of a candidate solution, an estimator that uses a much larger number of scenarios can be used:

[6] $\hat{f}_{N'}(\hat{x}_N) := \frac{1}{N'} \sum_{j=1}^{N'} F(\hat{x}_N, W^j)$

where $N' \gg N$. This is an unbiased estimator of $f(\hat{x}_N)$.
From these values, it is relatively easy to calculate the optimality gap. The true optimality gap is:

[7] $\mathrm{gap}(\hat{x}_N) := f(\hat{x}_N) - v^*$

The estimated optimality gap is calculated as:

[8] $\widehat{\mathrm{gap}}(\hat{x}_N) := \hat{f}_{N'}(\hat{x}_N) - \bar{v}_N^M$

The variance of the estimates can be calculated, and the variance of $\bar{v}_N^M$ is estimated as:

[9] $\hat{\sigma}^2_{\bar{v}} := \frac{1}{M(M-1)} \sum_{m=1}^{M} \left( \hat{v}_N^m - \bar{v}_N^M \right)^2$

and the variance of the optimality gap combines the variances of its two terms:

[10] $\hat{\sigma}^2_{\mathrm{gap}} := \hat{\sigma}^2_{\bar{v}} + \frac{1}{N'(N'-1)} \sum_{j=1}^{N'} \left( F(\hat{x}_N, W^j) - \hat{f}_{N'}(\hat{x}_N) \right)^2$
For a specific $N$, it is possible to estimate the probability that the solution produced will be within a specific optimality gap:

[11] $\hat{p}_g := \frac{M_g}{M}$

where $M_g$ is the total number of replications $m = 1, \ldots, M$ for which $\hat{f}_{N'}(\hat{x}_N^m) - \bar{v}_N^M \le g\,\bar{v}_N^M$, and $g$ is the specific optimality gap under consideration. Through the calculation of the estimate of the variance, a confidence limit can be produced. The estimated variance is calculated as:

[12] $\hat{\sigma}^2_{\hat{p}} := \frac{\hat{p}_g (1 - \hat{p}_g)}{M}$

Then the 95% confidence limit is calculated as:

[13] $\hat{p}_g \pm 1.96 \sqrt{\hat{\sigma}^2_{\hat{p}}}$

The algorithm (Kleywegt et al. 2002) is:

1. Select the initial sample sizes $N$, $N'$, and $M$. $M$ should be set large enough to allow an accurate estimation of the variances involved.
2. For $m = 1, \ldots, M$, do the following steps:
2.1. Generate a sample of size $N$ and solve the SAA problem to calculate $\hat{v}_N^m$ (equation [4]) and the solution $\hat{x}_N^m$.
2.2. Generate a sample of size $N'$ and calculate $\hat{f}_{N'}(\hat{x}_N^m)$ (equation [6]).
3. Estimate the optimality gap over all $m$ in $M$ using equation [8], and the variance of the gap estimator using equation [10].
4. If the estimate of the gap or the variance of the gap estimator is larger than desired, increase the sample sizes $N$ and/or $N'$ and return to step 2.
5. Select the best solution $\hat{x}$ among the candidate solutions $\hat{x}_N^m$ using a screening and selection method, then stop.
To summarize, the goal of the SAA method is to find the number of scenarios needed to produce solutions that fall within an acceptable gap of the estimated optimal solution.
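The procedure above can be sketched on a toy problem. The following Python sketch applies the same logic (M replications of an N-scenario approximation, evaluation of each candidate on a much larger sample N', and the gap and within-tolerance probability estimates of equations [8], [11] and [13]) to a simple newsvendor-style cost minimization. The problem, its parameters, and the use of the empirical quantile as an (approximately) SAA-optimal decision are illustrative assumptions, not the paper's forest problem.

```python
import numpy as np

rng = np.random.default_rng(42)
h, b = 1.0, 4.0                 # holding / backorder cost per unit (assumed)
q = b / (b + h)                 # critical ratio: optimal demand quantile

def cost(x, d):
    """Newsvendor-style cost for decision x under demand sample d."""
    return h * np.maximum(x - d, 0) + b * np.maximum(d - x, 0)

def saa(N, M=50, N_eval=20_000):
    """One pass of the SAA procedure: M replications with N scenarios each,
    candidates evaluated on an independent sample of size N_eval (N' >> N)."""
    d_eval = rng.exponential(10.0, size=N_eval)   # large evaluation sample
    v_hat = np.empty(M)                           # optimal SAA values
    f_eval = np.empty(M)                          # f_N'(x_hat) per replication
    for m in range(M):
        d = rng.exponential(10.0, size=N)         # N sampled scenarios
        x_N = np.quantile(d, q)                   # approx. SAA-optimal decision
        v_hat[m] = cost(x_N, d).mean()            # in-sample optimal value
        f_eval[m] = cost(x_N, d_eval).mean()      # out-of-sample evaluation
    v_bar = v_hat.mean()               # statistical lower-bound estimate of v*
    gap = f_eval.mean() - v_bar        # estimated optimality gap (eq. [8])
    g = 0.05                           # 5 % relative gap tolerance
    p_within = np.mean(f_eval - v_bar <= g * v_bar)     # eq. [11] analogue
    ci = 1.96 * np.sqrt(p_within * (1 - p_within) / M)  # eq. [13] analogue
    return v_bar, gap, p_within, ci

for N in (5, 50, 500):
    v_bar, gap, p_within, ci = saa(N)
    print(f"N={N:4d}  lower={v_bar:6.2f}  gap={gap:6.3f}  "
          f"P(gap<=5%)={p_within:.2f} +/- {ci:.2f}")
```

Running the loop with increasing N shrinks both the estimated gap and its spread, which is the behaviour the case study below examines for the forest planning problem.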

The forest planning problem

The SAA method is applied to a forest management planning case. The objective of the problem is to maximize the first-period timber harvest while ensuring an even flow of timber from forest operations, and to ensure continued even-flow possibilities past the planning horizon through an end-inventory constraint (a more detailed analysis of a similar problem is presented in Eyvindson and Kangas 2015). The harvest scheduling problem used in this paper is a well-known traditional problem, which allows readers to focus on the application of the stochastic programming models. The approach can be used for any properly developed stochastic programming formulation.
The general problem can be formulated as:

$\max \;\; \sum_{i=1}^{I} p_i \sum_{j=1}^{J} \sum_{k=1}^{K_j} c_{jki1} x_{jk} \;-\; \lambda \sum_{i=1}^{I} p_i \left( \sum_{t=2}^{T} \left( w_t^- d_{it}^- + w_t^+ d_{it}^+ \right) + w_E^- d_{Ei}^- + w_E^+ d_{Ei}^+ \right)$

subject to, for each scenario $i$ and period $t = 2, \ldots, T$:

$\sum_{j=1}^{J} \sum_{k=1}^{K_j} c_{jkit} x_{jk} - \sum_{j=1}^{J} \sum_{k=1}^{K_j} c_{jki,t-1} x_{jk} + d_{it}^- - d_{it}^+ = 0$

and, for each scenario $i$:

$\sum_{j=1}^{J} \sum_{k=1}^{K_j} PV_{jkiT} x_{jk} - \sum_{j=1}^{J} PV_{j0i} + d_{Ei}^- - d_{Ei}^+ = 0$

$\sum_{k=1}^{K_j} x_{jk} = 1 \;\; \text{for each stand } j, \qquad x_{jk} \ge 0, \qquad d_{it}^-, d_{it}^+, d_{Ei}^-, d_{Ei}^+ \ge 0$

where the problem inputs are: $c_{jkit}$, the quantity of timber harvested from stand $j$, schedule $k$, scenario $i$, and period $t$; $PV_{jkiT}$, the productive value of stand $j$ in schedule $k$ for scenario $i$ at the end of the planning horizon; and $PV_{j0i}$, the productive value at the beginning of the planning horizon for scenario $i$. The productive value is the value of the forest stock predicted using the models of Pukkala (2005), with basal area, mean diameter at breast height, discount rate, site variables, and timber price as predictors. $w_t^-$ ($w_t^+$) and $w_E^-$ ($w_E^+$) are the weights associated with negative (positive) deviations, and $\lambda$ is a risk coefficient that depends on the DM's risk preferences: it balances the deviations of the even-flow problem against the maximization of first-period timber. The number of periods is $T$ (here 6), $J$ is the number of stands, $K_j$ is the number of treatment schedules for stand $j$, $I$ is the total number of scenarios under consideration, and $p_i$ is the probability of scenario $i$. The decision variables are: $x_{jk}$, the proportion of stand $j$ managed using schedule $k$, where a schedule is a set of harvesting actions (e.g. thinning or final felling) or management actions (e.g. planting, fertilizing, or pre-commercial thinning) taken during the planning periods; $d_{it}^-$ ($d_{it}^+$), the negative (positive) deviation for scenario $i$ in period $t$; and $d_{Ei}^-$ ($d_{Ei}^+$), the negative (positive) deviation for the end-inventory productive value of the forests for scenario $i$. This is a simple recourse formulation that manages the downside risk of not meeting either the even-flow constraints or the end-value constraint. In this formulation, the weights reflect the importance of minimizing the expected negative deviations.
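As a concrete illustration, the deterministic equivalent of a formulation of this type can be assembled as a single linear program over all scenarios. The sketch below builds a tiny hypothetical instance (2 stands, 2 schedules, 3 periods, 3 scenarios, randomly drawn volumes; none of these numbers come from the case study) with penalized negative deviations, and solves it with scipy.optimize.linprog. It is a sketch of the problem structure, not the paper's actual model; positive-deviation weights are set to zero here for brevity.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance (all numbers hypothetical)
J, K, T, I = 2, 2, 3, 3                  # stands, schedules, periods, scenarios
p = np.full(I, 1.0 / I)                  # equal scenario probabilities
lam, w_neg, wE_neg = 1.05, 1.0, 0.5      # risk coefficient and deviation weights

rng = np.random.default_rng(1)
c = rng.uniform(50, 150, size=(J, K, I, T))      # harvest volume c[j,k,i,t]
PV_end = rng.uniform(900, 1100, size=(J, K, I))  # end-of-horizon productive value
PV_0 = rng.uniform(950, 1050, size=(J, I))       # initial productive value

# Variable order: x_jk | d_it^- | d_it^+ | dE_i^- | dE_i^+
nx = J * K
nd = I * (T - 1)
n = nx + 2 * nd + 2 * I
def xi(j, k): return j * K + k

obj = np.zeros(n)                        # linprog minimizes, so negate harvest
for j in range(J):
    for k in range(K):
        obj[xi(j, k)] = -sum(p[i] * c[j, k, i, 0] for i in range(I))
for i in range(I):
    for t in range(T - 1):
        obj[nx + i * (T - 1) + t] = lam * p[i] * w_neg   # even-flow d^-
    obj[nx + 2 * nd + i] = lam * p[i] * wE_neg           # end-value dE^-

A_eq, b_eq = [], []
# Even-flow: H_it - H_i,t-1 + d_it^- - d_it^+ = 0 for t = 2..T
for i in range(I):
    for t in range(1, T):
        row = np.zeros(n)
        for j in range(J):
            for k in range(K):
                row[xi(j, k)] = c[j, k, i, t] - c[j, k, i, t - 1]
        row[nx + i * (T - 1) + (t - 1)] = 1.0
        row[nx + nd + i * (T - 1) + (t - 1)] = -1.0
        A_eq.append(row); b_eq.append(0.0)
# End value: sum PV_end * x + dE^- - dE^+ = sum PV_0
for i in range(I):
    row = np.zeros(n)
    for j in range(J):
        for k in range(K):
            row[xi(j, k)] = PV_end[j, k, i]
    row[nx + 2 * nd + i] = 1.0
    row[nx + 2 * nd + I + i] = -1.0
    A_eq.append(row); b_eq.append(PV_0[:, i].sum())
# Area constraint: schedule proportions of each stand sum to one
for j in range(J):
    row = np.zeros(n)
    for k in range(K):
        row[xi(j, k)] = 1.0
    A_eq.append(row); b_eq.append(1.0)

res = linprog(obj, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * n, method="highs")
print(res.status, np.round(res.x[:nx], 3))   # 0 = optimal; schedule proportions
```

Because the deviation variables appear in every scenario's constraints, the size of this deterministic equivalent grows linearly with the number of scenarios, which is the tractability concern the SAA analysis addresses.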
In this problem formulation, there are two key aspects that should be described in detail. The first objective is to maximize the first-period harvest; the second is to minimize the weighted negative deviations for the subsequent period harvests while also minimizing the weighted negative deviations of the end productive value of the forest. A trade-off exists between maximizing the even flow of harvests (within the planning horizon) and ensuring that the end productive value is maintained (which ensures continued sustainability beyond the planning horizon). By ensuring that the productive value is maintained, the sustainability of future timber resources is secured.

Materials
A small forest holding will be used to demonstrate how the estimated optimality gap decreases as the number of scenarios increases. The forest holding is located in North Karelia, Finland, and is composed of 41 stands with a total area of 47.3 ha. The holding consists mainly of Scots pine (Pinus sylvestris L.) with a minor component of Norway spruce (Picea abies (L.) Karst.) and birch (Betula pendula and Betula pubescens). Key forest inventory information for the holding can be found in figure 1. The planning horizon is 30 years, with six 5-year periods. To calculate the productive value, a discount rate of 2% was used.
The sets of scenarios were generated using a Monte Carlo process. To allow a comparison of the amount of uncertainty contained in the scenario set, two different processes were used. The first Monte Carlo process incorporated only the uncertainty caused by inventory measurement, while the second incorporated the uncertainty caused by both inventory measurement and growth modeling errors. The measurement errors were introduced into both the dominant height and the basal area.
The errors were assumed to be normally distributed and unbiased, with a relative standard error of 20%, which reflects state-of-the-art inventory methods (Naesset, 2004). Recent studies have shown only small correlations between the errors of different variables, so the measurement errors were considered uncorrelated (Haara and Korhonen, 2004; Mäkinen et al. 2010). In this case, we used a stand-level growth model for the sake of simplicity. The growth model errors were assumed to follow a first-order autoregressive process (AR(1)). For details of the exact process used to simulate growth model errors, readers are referred to Phase II of the Materials and Methods section in Pietilä et al. (2010).
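A minimal sketch of such a scenario generator is shown below. The placeholder growth model (a flat 5% basal-area increment per period) and the AR(1) parameters rho and growth_sd are illustrative assumptions, not values from the paper or from Pietilä et al. (2010); only the error structure (unbiased multiplicative inventory error, AR(1) growth error per period) follows the description above.

```python
import numpy as np

def simulate_scenarios(ba0, hdom0, n_scen, n_periods,
                       rel_se=0.20, rho=0.6, growth_sd=0.05, seed=None):
    """Monte Carlo scenario generator (sketch): unbiased, uncorrelated
    inventory errors on basal area and dominant height (20 % relative SE),
    plus an AR(1) growth-model error applied in each 5-year period.
    The growth model, rho, and growth_sd are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    scen = []
    for _ in range(n_scen):
        # Inventory error: one unbiased multiplicative draw per stand attribute
        ba = np.maximum(ba0 * (1 + rng.normal(0, rel_se, ba0.shape)), 0.0)
        h = np.maximum(hdom0 * (1 + rng.normal(0, rel_se, hdom0.shape)), 0.0)
        eps = np.zeros(ba0.shape)          # AR(1) state of the growth error
        traj = []
        for _t in range(n_periods):
            ba = ba * 1.05                 # placeholder growth model
            h = h * 1.02
            eps = rho * eps + rng.normal(0, growth_sd, ba.shape)
            traj.append(ba * np.exp(eps))  # multiplicative growth error
        scen.append(np.array(traj))
    return np.array(scen)                  # (n_scen, n_periods, n_stands)

ba0 = np.array([20.0, 15.0, 25.0])    # basal area, m^2/ha (hypothetical stands)
hdom0 = np.array([18.0, 14.0, 22.0])  # dominant height, m (hypothetical)
scenarios = simulate_scenarios(ba0, hdom0, n_scen=100, n_periods=6, seed=7)
print(scenarios.shape)
```

Each generated scenario is one complete realization of the holding's development, and a set of them forms the scenario input to the stochastic program.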
To highlight how the optimality gap improves as the number of scenarios increases, the algorithm was run with a stopping criterion based on the number of iterations. For both cases, N' was set to 1,000 and M was set to 500. For the case where only inventory errors were introduced into the model, the algorithm was run a total of 16 times, with N starting at one, increasing by one each iteration until ten, and then by five until reaching an N of 40. For the case where both inventory and growth model errors were introduced, the algorithm was run a total of 15 times, with N starting at ten, increasing by ten each iteration until 100, and then by 20 until reaching an N of 200.
To analyse the influence risk preferences have on the optimality gap, the algorithm was run using two different values of λ. While the weights ($w_t^-$, $w_t^+$, $w_E^-$ and $w_E^+$) should be set by the decision maker, for this example a surrogate was used to approximate the risk-neutral case. The weights were set to the shadow prices for the special case in which the inventory and growth model errors are assumed to be zero and the weights are initially set to an arbitrarily high value. This model reflects the standard deterministic model currently used in forestry. The use of shadow prices as weights is justified, as they describe an underlying implicit utility model (Lappi 1992). The algorithm was run for λ = 1.05 (a nearly risk-neutral case) and λ = 4 (a moderately risk-averse case).
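How such surrogate weights can be extracted is sketched below: solve the deterministic special case (a single error-free scenario) with a hard even-flow equality constraint, then read the shadow price (dual value) of that constraint from the solver. The two-stand instance and all volumes are hypothetical, and SciPy's HiGHS backend is assumed (it exposes equality-constraint duals as res.eqlin.marginals); this is not the paper's actual model.

```python
import numpy as np
from scipy.optimize import linprog

# Deterministic special case: maximize first-period harvest subject to a
# hard even-flow equality H_2 = H_1. Volumes are hypothetical, c[j][k].
c1 = np.array([[100.0, 60.0], [80.0, 50.0]])   # period-1 harvest per schedule
c2 = np.array([[40.0, 90.0], [55.0, 85.0]])    # period-2 harvest per schedule

n = 4                                # x_jk flattened: (0,0),(0,1),(1,0),(1,1)
obj = -c1.flatten()                  # linprog minimizes -> negate harvest
A_eq = [(c2 - c1).flatten(),         # even-flow: H2 - H1 = 0
        [1, 1, 0, 0],                # stand 1 proportions sum to 1
        [0, 0, 1, 1]]                # stand 2 proportions sum to 1
b_eq = [0.0, 1.0, 1.0]
res = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n,
              method="highs")
shadow = res.eqlin.marginals[0]      # dual of the even-flow constraint
print(res.status, shadow)
```

The dual value measures how much the optimal first-period harvest would change per unit of relaxation of the even-flow constraint, which is what makes it a natural surrogate deviation weight.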

Results
The impact of increasing the number of scenarios on the optimality gap can be clearly seen in figures 2 and 3. The cases where only inventory errors were included are shown in figure 2, while figure 3 shows the cases where both inventory and growth model errors were included. The figures show the probability that the solution created with N scenarios is within a specified tolerance limit of the optimal value (eq. 11), with the 95% confidence interval as error bars (eq. 13).
Both figures clearly show an increased proportion of small optimality gaps as the number of scenarios increases. The improvement occurs quickly for the case with only inventory errors and a risk-neutral perspective. The number of scenarios required when both inventory and growth model errors are included and a risk-averse perspective is taken is much greater if a very low optimality gap is desired. This effect is seen in Table 1 through both the expected optimality gap (equation [8]) and the variance of the optimality gap (equation [10]). For both cases, as the number of scenarios increases, both the optimality gap and its variance decrease quickly.

Discussion
The results of the SAA method clearly highlight the trade-off between solution tractability and the optimality gap of the stochastic solution. For all cases, the optimality gap steadily decreases as the sample size increases. The sample size required to maintain a specified optimality gap increases dramatically as the amount of uncertainty increases and as the importance of managing downside risk increases. For both of these problems, a sample size of 200 scenarios can ensure an optimality gap of less than 2%.
An interesting aspect of the analysis is the increased number of scenarios needed as the importance of managing risk increases. This is clearly shown in figures 2 and 3 by the movement from λ = 1.05 (relatively risk neutral) to λ = 4 (relatively risk averse). To maintain a similar optimality gap, the number of scenarios required to approximate the problem increases roughly ten-fold (Table 1). This requirement is quite reasonable. For the risk-neutral case, the negative deviations are not very important, and the emphasis is on maximizing first-period timber. For the risk-averse case, the negative deviations are important, and the scenario set approximating the problem must be more diverse. As risk management is a key differentiator between deterministic and stochastic programming, it is important to acknowledge the effect that accounting for risk has on the required scenario set size.
The case study analysed in this paper is rather small, involving only 41 stands. While small, the problem can be considered representative of a typical forest holding owned by private individuals in Finland.
The results of this study highlight that stochastic programming for small-scale owners can be technically implemented in current forest management tools. While technically possible, the value of using stochastic programming still needs to be demonstrated to both forest owners and forest planners. This is perhaps a greater challenge than the technical challenge of integrating stochasticity into the management tools. It will require a shift from optimizing a forest holding in a deterministic setting to optimizing it in a stochastic setting while accounting for the forest owner's risk preferences.
For larger-scale forest management problems, such as regional plans or forest-company-level plans, the computational power required to solve the problem can be immense. Approaches that harness large computational resources may ease the problem: parallel computing and computational grids (Linderoth and Wright 2003) could be used to focus computational power on the problem. Alternatively, stochastic decomposition methods (Higle and Sen 1996), which limit the size of the stochastic programming problem, could be used in place of the SAA method.
The development of large forest management plans also tends to focus on strategic-level concepts, so the problem formulation may be significantly different. Some large management plans may not be solvable even with deterministic approaches, and may require a hierarchical approach to planning (Kangas et al. 2014b). In a similar fashion, it could be possible to separate the stochastic programming problem into a hierarchical structure that solves a set of relatively small problems.
The stochastic programming example used in this study utilized simple recourse. Simple recourse models evaluate the penalty associated with not achieving the specified goals; in this case, the penalty was the expected negative deviation from an even flow of harvested timber. More advanced models could be developed that allow for the resolution of some of the uncertainty. For instance, a two-stage stochastic model could identify the optimal time to conduct the next inventory. This would allow a resolution of the growth model uncertainties, but would depend on the cost of conducting the next inventory, the interest rate, and the risk preferences of the decision maker.
Once solved, the stochastic program provides a single implementable set of decisions for the forest holding. To allow adaptive management of the forest, a new plan can be generated; how frequently new plans are needed depends on the demands of the decision maker. Even though the plan covers a long time horizon, this does not imply that the plan needs to be followed for the entire horizon. In fact, the decision maker would be wise to update the forest plan when preferences change or when some uncertainty has been resolved (i.e. after conducting management actions, or after an updated inventory).
The results highlight the quick decrease in the optimality gap and its variance as the number of scenarios increases. The decrease is more rapid when fewer errors are included and when risk management is less important in the problem formulation. The SAA method also highlights the drastic improvement of the stochastic solution over the deterministic solution.
The improvement from the deterministic case to the stochastic case can be seen in the special cases where only one scenario was used to represent the distribution (when λ = 1.05). For these cases, the optimality gap and variance are rather large, and the decrease in these values as scenarios are added highlights the improvement in the objective gained simply by including the uncertainties in the analysis. So even in cases where the optimality gap is large, the solution to the stochastic programming formulation is a drastic improvement over its deterministic equivalent.

For future research, there are numerous factors and considerations that should be analyzed. In this paper, we examined the specific case where the error distributions were unbiased and normally distributed. It would be interesting to examine whether the improvement in the solution is as dramatic if the growth model errors or the inventory errors were biased (e.g. overestimating basal area in a sparse stand and underestimating it in a dense stand). It is also possible to examine the influence of using different distributions to describe the errors; for instance, an analysis could estimate the cost of assuming a normal distribution when the "true" distribution is something else (e.g. a log-logistic distribution). Additionally, more demanding risk measures (e.g. Conditional Value at Risk) could be used, and they may require larger scenario sets to produce solutions with an acceptable optimality gap.

Conclusions
Determining how many scenarios are required to deterministically approximate a stochastic forest management problem depends on several key factors: the amount of uncertainty integrated into the problem, the risk preferences of the decision maker, and the optimality gap the decision maker finds acceptable. In this study, only inventory and growth model errors were included in the analysis. Other uncertainties could also be included, such as the impacts of climate change, the potential for natural damage (e.g. storms, insects, or fire), and timber price variation.
Elicitation of the decision maker's risk preferences should not be restricted in order to lessen the need for additional scenarios. If technical constraints limit the quality of the solution, the impact should fall on the optimality gap rather than on how the decision maker formulates the problem. Thus, while structuring and preparing a stochastic problem is much more demanding than its deterministic counterpart, it is important to remember that the resulting decision will be more robust, as it is based on additional relevant information.


Figure 1. (a) Age-class distribution of the holding at the beginning of the planning period; and (b) diameter distribution at 1.3 m height at the beginning of the planning period.

Table 1. The estimates of the optimality gap and variance for the two different risk preferences analysed and for the two different error cases.