Data Envelopment Analysis (DEA) has a long history. The seminal article by Farrell (1957) aimed to develop a comparative measure of production efficiency. This work was later extended into DEA by Charnes et al. (1978), who presented a quantitative measure for assessing the relative efficiency of DMUs using a frontier method that determines the maximum volume of outputs attainable from a given set of inputs. The (in)efficiency of a production system can then be assessed ex post from its distance to the production frontier, without any explicit assumptions about the underlying production technology. This is usually a deterministic, nonparametric analysis based on linear programming.

In parallel, stochastic frontier analysis (SFA) was proposed by Aigner et al. (1977) and Meeusen and van den Broeck (1977), who presented a parametric and stochastic approach. This approach assumes that the production function includes stochastic components which describe random shocks, such as climate or geographical factors.

A comparison of the characteristics of these approaches is provided in Table 2.1 (for more detail, see Hjalmarsson et al. (1996) and Bogetoft and Otto (2011)).

Table 2.1 Comparison of the characteristics of DEA and SFA

Over the years, DEA has become a popular method on account of the merits summarized above and has been used as an operational tool for analyzing efficiency problems in both the private and the public sector, where (in)efficiency is interpreted as the relative distance from an actual situation to the production frontier.

DEA was fully developed by Charnes et al. (1978), and later extended by Banker et al. (1984), to analyze the efficiency of a decision-making unit (DMU), as well as to determine improvements in performance on the basis of an appropriate radial projection of the DMU. Efficiency is measured as the ratio of the weighted sum of outputs to the weighted sum of inputs, under the condition that this ratio is less than (or equal to) 1 for every DMU under consideration. The main goal is then to determine, for each DMU, the weights that maximize its own efficiency score.
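To make the mechanics concrete, the following sketch solves the input-oriented CCR model for one DMU as a small linear program (the envelopment form, which is the dual of the ratio formulation), using SciPy. The data and function name are illustrative, not part of the original text:

```python
import numpy as np
from scipy.optimize import linprog


def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o` under constant returns to scale.

    X: (n, m) array of inputs, Y: (n, s) array of outputs; rows are DMUs.
    Solves the envelopment LP:
        min theta
        s.t. sum_j lambda_j * x_j <= theta * x_o
             sum_j lambda_j * y_j >= y_o
             lambda_j >= 0
    """
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                                      # minimise theta
    # Input rows:  -theta * x_o + sum_j lambda_j x_j <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # Output rows: -sum_j lambda_j y_j <= -y_o
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.fun
```

For two hypothetical DMUs with single inputs 2 and 4 and an identical output of 2, the first DMU obtains a score of 1 (efficient), while the second obtains 0.5, i.e., a radial projection would reduce its input by half.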

Ever since, DEA has frequently been used in the literature to study the relative efficiency of DMUs through the use of comparative benchmarks. Emrouznejad and Thanassoulis (1997) had already identified some 1,500 applications of DEA, and a more recent overview by Seiford (2005) mentions some 2,800 published articles on DEA. This large number of studies shows that comparative efficiency analysis has become an important topic in operational research, public policy, energy-environment management, and regional development (for an overview of DEA, see also Cooper et al. 2006).

A weak element in a standard DEA model is that all efficient DMUs receive a score of 1, so that there is no way to differentiate between them. This has prompted focused research on further discriminating between efficient DMUs, in order to arrive at a ranking, or even a numerical rating, of these efficient DMUs, without affecting the results for the non-efficient ones. In particular, Andersen and Petersen (1993) developed a radial Super-Efficiency model, while later on Tone (2001, 2002) designed a slacks-based measure (SBM) of super efficiency in DEA. In general, a Super-Efficiency model aims to identify the relative importance of each individual efficient DMU by designing and measuring a score for its “degree of influence” when this efficient DMU is omitted from the efficiency frontier (or production possibility set). If this elimination really matters (i.e., if the distance from this DMU to the remaining efficiency frontier is large), the DMU concerned has a high degree of influence, outperforms the other DMUs, and receives a high score (it is thus super-efficient). Hence, for each individual DMU, a new distance result is obtained, which leads to a new ranking, or even a rating, of all the originally efficient DMUs.
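The Andersen–Petersen idea of omitting the evaluated DMU from the reference set can be sketched as follows (again with SciPy and hypothetical data; the function name is ours). The only change from the standard CCR envelopment program is that DMU `o` is excluded from the frontier it is compared against, so efficient units can score above 1:

```python
import numpy as np
from scipy.optimize import linprog


def super_efficiency(X, Y, o):
    """Andersen-Petersen radial super-efficiency of DMU `o` (input-oriented, CRS).

    Identical to the CCR envelopment LP except that DMU `o` is removed from
    the reference set, so the remaining frontier may lie beyond it and the
    optimal theta can exceed 1 for efficient units.
    """
    n, m = X.shape
    s = Y.shape[1]
    keep = [j for j in range(n) if j != o]          # drop the evaluated DMU
    Xr, Yr = X[keep], Y[keep]                       # reference set without DMU o
    # Decision variables: [theta, lambda_j for j != o]
    c = np.zeros(len(keep) + 1)
    c[0] = 1.0                                      # minimise theta
    A_in = np.hstack([-X[o].reshape(m, 1), Xr.T])   # sum lambda_j x_j <= theta x_o
    A_out = np.hstack([np.zeros((s, 1)), -Yr.T])    # sum lambda_j y_j >= y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                  bounds=[(None, None)] + [(0, None)] * len(keep),
                  method="highs")
    return res.fun
```

With two hypothetical DMUs having single inputs 2 and 5 and an identical output of 2, the first (efficient) DMU obtains a super-efficiency score of 2.5, reflecting how far it outperforms the remaining frontier, while the second scores 0.4.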

Great interest has been shown in DEA during the last two decades, with major progress made in both methodological terms and in a range of applications (see, e.g., Cook and Seiford 2009, for an overview). A prominent contribution in DEA has been the successful integration of DEA and MOLP (multi-objective linear programming) models (see, e.g., Belton 1992; Belton and Vickers 1993; Doyle and Green 1993). Most of the research was inspired by the pioneering work of Golany (1988), who tried to find efficient solutions in order to map out the efficiency frontier in an interactive way. Later on Kornbluth (1991) was able to show the similarity between DEA problems and fractional multiple objective quadratic programming (MOQP) problems. This similarity holds for both input-oriented and output-oriented models.

Most contributions have their origin in the standard Charnes et al. (1978) model (abbreviated hereafter as the CCR model) or in the Banker et al. (1984) model (abbreviated hereafter as the BCC model), which provide the foundations of DEA. All such models aim to find an appropriate projection for an efficiency improvement for each inefficient DMU, based on a radial projection in which the input volumes are reduced (or the output volumes are increased) by a uniform ratio.

The existence of many possible efficiency-improvement solutions has in recent years prompted a rich literature on the methodological integration of MOLP and DEA models. As mentioned, the first contribution was made by Golany (1988), who proposed an interactive MOLP procedure aimed at generating a set of efficient points for a DMU. This model allows a decision maker to select the preferred set of output levels, given the input levels. Next, Thanassoulis and Dyson (1992) developed adjusted models which can be used to estimate alternative input and output levels in order to render relatively inefficient DMUs more efficient. These models are able to incorporate preferences for a potential improvement of individual input and output levels, so that the resulting target levels reflect the user’s relative preference over alternative paths to efficiency. Joro et al. (1998) demonstrated, from a mathematical viewpoint, the structural similarity between a DEA model and a reference point model in an MOLP formulation. In addition, the reference point model makes it possible to search freely on the efficient frontier for good solutions, or for the most preferred solution, based on the decision maker’s preference structure. Later on, Halme et al. (1999) developed value efficiency analysis (VEA), which includes the decision maker’s preference information in a DEA model. The foundation of VEA originates from the reference point model in an MOLP context: the decision maker identifies the most preferred solution (MPS), so that each DMU can be evaluated by means of a value function based on the MPS. A further development of this approach was made by Korhonen and Siljamäki (2002), who dealt with several practical aspects of the use of VEA. In addition, Korhonen et al. (2003) developed a multiple objective approach which allows for changes in the time frame. Finally, Lins et al. (2004) proposed two multi-objective approaches that provide the basis for incorporating a posteriori preference information. The first of these models is MORO (multiple objective ratio optimization), which optimizes the ratios between the observed and the target inputs (or outputs) of a DMU. The second is MOTO (multiple objective target optimization), which directly optimizes the target values.

An original contribution to DEA using stepwise preference information from a DMU was made by Seiford and Zhu (2003), who developed a gradual improvement model for an inefficient DMU. This “context-dependent” (CD) DEA has an important merit, as it aims to reach a stepwise improvement, through successive levels, toward the efficiency frontier. This approach is certainly important if an unambiguous decision maker can be identified, e.g., in the private sector. In many cases, however, we are faced with a fuzzy decision situation in which there is no clear decision authority (e.g., public welfare). In such circumstances, we have to resort to an approach that is not based on value judgments. In this regard, Angulo-Meza and Lins (2002) make the following observations:

There are disadvantages in the methods that incorporate a priori information, concerning subjectivity:

  • The value judgments, or a priori information, can be wrong or biased, or the ideas may not be consistent with reality.

  • There may be a lack of consensus among the experts or decision makers, and this can slow down or adversely affect the study.

Indeed, one may want to preserve the DEA spirit in the sense of not including a priori information (p. 232).

Given these considerations, we propose in our study a new improvement projection model, called the distance friction minimization (DFM) approach, which does not need to incorporate the value judgments of a decision maker. A generalized distance friction function will be presented to identify an appropriate movement toward the efficiency frontier surface. The direction of this efficiency improvement depends on the input/output data characteristics of the DMU, each of which may carry a different weight for the DMU. To achieve an appropriate rise in efficiency, we take into account the most appropriate input/output weights of these characteristics. We can then define projection functions for the minimization of the distance friction, using a Euclidean distance in weighted spaces. This model uses the elements of a multiple objective quadratic programming (MOQP) model. The approach has several advantages: there is no need to rely on the subjective preference information of a DMU; restrictive radial projection methods are not needed; and the DFM model is able to treat input reduction and output increase simultaneously. Furthermore, our DFM model allows a methodological integration with a radial Super-Efficiency model to mitigate the abovementioned discrimination problem. This idea will be further unfolded and applied in subsequent chapters.
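As an illustrative sketch only (the notation here is ours; the formal DFM model is developed in subsequent chapters), minimizing a distance friction with a Euclidean metric in a weighted input/output space leads to a quadratic objective of roughly the form

```latex
\min_{x^{*},\, y^{*}} \;
  \sum_{i=1}^{m} \bigl( v_i x_{io} - v_i x_i^{*} \bigr)^{2}
  + \sum_{r=1}^{s} \bigl( u_r y_{ro} - u_r y_r^{*} \bigr)^{2}
\quad \text{s.t. } (x^{*}, y^{*}) \text{ lies on the efficiency frontier,}
```

where $(x_{o}, y_{o})$ are the observed inputs and outputs of the DMU under evaluation, $(x^{*}, y^{*})$ its projection, and $v_i$, $u_r$ the input and output weights. The quadratic form of the objective is what connects the DFM approach to MOQP.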