
Trust and adaptive learning in implicit contracts

  • Original Paper, Review of Managerial Science

Abstract

Trust is a phenomenon that is still rarely investigated in agency theory. Intuitively, trust should develop over time, and it should evolve even in finite implicit-contract relationships. If the contracting parties are fully rational, however, theory cannot explain this. We therefore extend the standard model and develop a model of a finite relationship in which the principal promises to pay a voluntary period-by-period bonus if the agent has worked according to the implicit agreement. The agent is boundedly rational and unable to foresee the principal’s future bonus decisions. The principal is, with some probability, honest and pays a promised bonus even in situations where ex-post cheating would be optimal. Based on the agent’s adaptive learning process, we show how trust evolves depending on the principal’s bonus-payment strategy. For different levels of the agent’s bounded rationality, we derive the principal’s optimal pure strategy as part of a unique equilibrium. In an extension we show that the results are robust if the agent has bounded recall. The optimal strategy pattern mirrors a subset of trigger strategies, which is exogenous in the standard model. Our findings imply that subjective incentives become more effective with increasing employee tenure, or that the optimal level of trust depends on how fast work environments change.


Notes

  1. Various journals published special issues on “trust”, e.g. Academy of Management Review (1998), Journal of Economic Behavior and Organization (2004), or Organization Science (2003).

  2. See also Kidder and Buchholtz (2002) with a special focus on executive compensation. They argue that a violation of a relational contract, i.e., trust-abusing behavior, reduces executives’ stewardship incentives.

  3. See Kaplan and Norton (1996, 2001) with reference to the balanced scorecard concept.

  4. Bonus pools are an alternative way to make subjective bonus payments credible. See Baiman and Rajan (1995) or Rajan and Reichelstein (2009).

    MacLeod (2003) analyzes problems that arise when different perceptions between the organization (principal) and the employee (agent) are present, whereas Mitusch (2006) deals with the principal’s ability to produce “hard facts”, i.e., verifiable performance measures.

  5. In models with pure strategies, “trust” is either perfect, or there is no trust at all.

  6. See also Gürtler (2006).

  7. Camerer and Weigelt (1988, p. 2) suggest it is plausible to assume that principals (firms) are able to compute these sequential equilibria, possibly with the help of consultants, but that agents (employees) are less likely to calculate them.

  8. Psychological research suggests that if one is unable to calculate exact probabilities and strategies, observed behavior is often the best predictor (see March 1994, p. 13).

  9. See also Casadesus-Masanell and Spulber (2007).

  10. This is a very common definition of trust. See Nooteboom (2006, p. 249).

  11. See also Kreps (1990, p. 102f).

  12. Formally, the contract of period t can be thought of as consisting of an explicit fixed payment \(s_t\), an implicit effort level \(e_t\), and an implicit bonus \(b_t\). If the agent has performed the pre-specified level of effort in period t, he is eligible for the bonus payment. Instead of including the desired effort level explicitly in the contract, we let the agent’s effort be induced via the bonus function \(b_t = v_t e_t\), which depends on the observed effort level \(e_t\). As both parties are risk-neutral, the principal can induce every desired level \(e_t\) between 0 and 1 via the bonus function at the same cost as by writing this level directly into the contract. Consequently, there is no loss of generality in using this approach.

  13. There are a number of experimental studies showing that individuals fail to correctly apply backward induction (see, e.g., Binmore et al. 2002; Johnson et al. 2002) or do not plan ahead (Hey and Knoll 2007).

  14. Note that, from the agent’s point of view, the principal’s decision to pay the bonus is a draw from a Bernoulli distribution with γ as the unknown parameter. As this parameter has a beta distribution with parameters (α, β), the draw can be used to update the agent’s probability assessment of γ (DeGroot 1970, p. 160). A short numerical sketch of this updating rule appears at the end of these notes.

  15. See Kramer (1999) for further explanations and references for its validity.

  16. A more general stochastic structure with the first two properties could be used to arrive at qualitatively identical results. For unbounded recall it would still be analytically tractable, but not for the analysis of bounded recall. To obtain closed-form solutions in both cases without changing the stochastic structure, we decided in favor of the beta distribution.

  17. Notice that by definition \(S_{t}^{D}\left( \varvec{ \theta }^{\ast \ast }\right) =S_{t}^{D}\left( \varvec{ \theta }^{\ast }\right) \).

  18. The same effect occurs if a sample from a normal distribution is used to update the mean of a normally distributed variable: The higher the variance of the prior distribution, the stronger the impact of the sample on the posterior mean.

  19. See Wixted (2004) for a review of the topic and, for non-psychologists, an introduction to it.

  20. Basu and Waymire (2006) and Basu et al. (2007) show that recordkeeping, e.g., as required by modern accounting systems, enhances trust and therefore enables complex economic transactions. The need for recordkeeping to support human memory supports our assumption of limited recall in the first place. Since, in practice, recordkeeping will not be observed in an employer–employee relationship, however, we do not consider formal recordkeeping as a device to support more effective recall by the agent.

  21. Ittner et al. (2003) provide an empirical example where individual balanced scorecards were removed because employees did not trust the scorecard measures anymore after supervisors ignored a number of them or attached different weights to them from quarter to quarter.

  22. See McKnight et al. (1998) for a (non-analytical) model of initial trust formation and the references to empirical findings therein.

  23. See also Murphy and Oyer (2003).

  24. Cf. Wicks et al. (1999), p. 101.

  25. According to Rousseau (1990, p. 390), a psychological contract consists of individual beliefs regarding reciprocal obligations.

  26. Cf. optimal trust for τF = {4, 6} in Table 2.

  27. After the representative sequence has been played once, the surplus is the same for every future repetition due to the agent’s bounded recall.

  28. We suppress time indices in the surpluses of a representative sequence. Without discounting, the surplus is uniquely determined by the number of payments/non-payments in the sequence and the trust prevailing at these decisions.

  29. Assuming T > 2τF under strategy \(\varvec{ \theta }(\varvec{ \tau}^{F}, {\bf 0})\) or \(\varvec{ \theta }(\varvec{ \tau}^{F}{\bf -i},{\bf 0}),\) respectively, trust is completely destroyed at the end of period 2τF or (2τF − i), respectively, such that all future payoffs are zero.
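The beta–Bernoulli updating described in note 14 can be traced with a few lines of code. The sketch below is purely illustrative: the prior parameters (α, β) and the observed bonus history are arbitrary choices, not values from the model.

```python
# Sketch of the beta-Bernoulli updating behind the agent's trust (note 14).
# The prior parameters (alpha, beta) and the observed bonus history are
# arbitrary illustrative choices, not values taken from the paper.

def posterior_trust(alpha, beta, history):
    """Posterior mean of gamma after observing a 0/1 bonus history."""
    paid = sum(history)
    return (alpha + paid) / (alpha + beta + len(history))

alpha, beta = 1.0, 1.0          # uniform prior over gamma
history = [1, 1, 0, 1]          # bonus paid, paid, withheld, paid

trust = alpha / (alpha + beta)  # prior mean
print(f"prior trust: {trust:.3f}")
for t, theta in enumerate(history, start=1):
    trust = posterior_trust(alpha, beta, history[:t])
    print(f"after period {t} (theta={theta}): trust = {trust:.3f}")
```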

References

  • Aumann RJ, Maschler MB (1995) Repeated games with incomplete information. MIT Press, Cambridge

  • Baiman S, Rajan MV (1995) The informational advantages of discretionary bonus schemes. Account Rev 70(4):557–579

  • Baker G (1990) Pay-for-performance for middle managers: causes and consequences. J Appl Corp Fin 3:50–61

  • Baker G, Gibbons R, Murphy KJ (1994) Subjective performance measures in optimal incentive contracting. Q J Econ 109(4):1125–1156

  • Basu S, Waymire GB (2006) Recordkeeping and human evolution. Account Horiz 20(3):201–229

  • Basu S, Dickhaut J, Hecht G, Towry K, Waymire G (2007) Recordkeeping alters economic history by promoting reciprocity. Working paper

  • Binmore K, McCarthy J, Ponti G, Samuelson L (2002) A backward induction experiment. J Econ Theory 104(1):48–88

  • Boot AWA, Greenbaum SA, Thakor AV (1993) Reputation and discretion in financial contracting. Am Econ Rev 83(5):1165–1183

  • Boyle R, Bonacich P (1970) The development of trust and mistrust in mixed-motive games. Sociometry 33(2):123–139

  • Bull C (1987) The existence of self-enforcing implicit contracts. Q J Econ 102(1):147–160

  • Butler JK Jr (1983) Reciprocity of trust between professionals and their secretaries. Psychol Rep 53:411–416

  • Camerer C, Weigelt K (1988) Experimental tests of a sequential equilibrium reputation model. Econometrica 56(1):1–36

  • Campbell D, Lee C (1988) Self-appraisal in performance evaluation: development versus evaluation. Acad Manage Rev 13:302–314

  • Campbell DJ, Campbell KM, Chia H-B (1998) Merit pay, performance appraisal, and individual motivation: an analysis and alternative. Hum Resour Manage 37(2):131–146

  • Casadesus-Masanell R (2004) Trust in agency. J Econ Manage Strategy 13(3):375–404

  • Casadesus-Masanell R, Spulber DF (2007) Agency revisited. Working paper, HBS and Northwestern University

  • Cripps MW, Mailath GJ, Samuelson L (2004) Imperfect monitoring and impermanent reputations. Econometrica 72(2):407–432

  • DeGroot MH (1970) Optimal statistical decisions. McGraw-Hill, New York

  • Folger R, Konovsky M (1989) Effects of procedural and distributive justice on reactions to pay raise decisions. Acad Manage J 32:115–130

  • Forges F (1992) Repeated games of incomplete information: non-zero-sum. In: Aumann RJ, Hart S (eds) Handbook of game theory with economic applications. North-Holland, Amsterdam, pp 155–177

  • Friedman JW (1971) A non-cooperative equilibrium for supergames. Rev Econ Stud 38(1):1–12

  • Fudenberg D, Maskin E (1986) The folk theorem in repeated games with discounting or with incomplete information. Econometrica 54(3):533–554

  • Gibbs M, Merchant KA, Van der Stede WA, Vargus ME (2004) Determinants and effects of subjectivity in incentives. Account Rev 79(2):409–436

  • Green EJ, Porter RH (1984) Noncooperative collusion under imperfect price information. Econometrica 52(1):87–100

  • Gürtler O (2006) Implicit contracts: two different approaches. Working paper, University of Bonn

  • Hedge JW (2000) Exploring the concept of acceptability as a criterion for evaluating performance measures. Group Organ Manage 25(1):22–44

  • Hey JD, Knoll JA (2007) How far ahead do people plan? Econ Lett 96:8–13

  • Hopwood AG (1972) An empirical study of the role of accounting data in performance evaluation. J Account Res 10:156–182

  • Ittner CD, Larcker DF (2003) Coming up short on nonfinancial performance measurement. Harv Bus Rev 81(11):88–95

  • Ittner CD, Larcker DF, Meyer MW (2003) Subjectivity and the weighting of performance measures: evidence from a balanced scorecard. Account Rev 78(3):725–758

  • John K, Nachman DC (1985) Risky debt, investment incentives, and reputation in a sequential equilibrium. J Fin 40(3):863–878

  • Johnson EJ, Camerer C, Sen S, Rymon T (2002) Detecting failures of backward induction: monitoring information search in sequential bargaining. J Econ Theory 104(1):16–47

  • Jones GR, George JM (1998) The experience and evolution of trust: implications for cooperation and teamwork. Acad Manage Rev 23(3):531–546

  • Jonker CM, Schalken JJP, Theeuwes J, Treur J (2004) Human experiments in trust dynamics. Lect Notes Comput Sci 2995:206–220

  • Kaplan RS, Norton DP (1996) Using the balanced scorecard as a strategic management system. Harv Bus Rev 74(Jan–Feb):75–87

  • Kaplan RS, Norton DP (2001) The strategy-focused organization. Harvard Business School Press, Boston, MA

  • Kidder DL, Buchholtz AK (2002) Can excess bring success? CEO compensation and the psychological contract. Hum Resour Manage Rev 12(4):599–617

  • Kramer RM (1999) Trust and distrust in organizations: emerging perspectives, enduring questions. Ann Rev Psychol 50:569–598

  • Kreps DM, Wilson R (1982) Reputation and imperfect information. J Econ Theory 27(2):253–279

  • Kreps DM (1990) Corporate culture and economic theory. In: Alt JE, Shepsle KA (eds) Perspectives on positive political economy. Cambridge University Press, Cambridge, pp 90–143

  • Levin J (2003) Relational incentive contracts. Am Econ Rev 93(3):835–857

  • Lewicki R, Bunker BB (1995) Developing and maintaining trust in work relationships. In: Kramer RM, Tyler TR (eds) Trust in organizations: frontiers of theory and research. SAGE Publications, Thousand Oaks, pp 114–139

  • Luhmann N (1979) Trust and power. Wiley, New York

  • MacLeod B (2003) Optimal contracting with subjective evaluation. Am Econ Rev 93(1):216–240

  • Mailath GJ, Morris S (2002) Repeated games with almost-public monitoring. J Econ Theory 102:189–202

  • Mailath GJ, Morris S (2006) Coordination failure in repeated games with almost-public monitoring. Theor Econ 1:311–340

  • Mailath GJ, Samuelson L (2001) Who wants a good reputation? Rev Econ Stud 68(2):415–441

  • Mailath GJ, Samuelson L (2006) Repeated games and reputations: long-run relationships. Oxford University Press, Oxford

  • March JG (1994) A primer on decision making: how decisions happen. The Free Press, New York

  • McKnight DH, Cummings LL, Chervany NL (1998) Initial trust formation in new organizational relationships. Acad Manage Rev 23(3):473–490

  • Milkovich GT, Newman JM (2002) Compensation, 7th edn. McGraw-Hill/Irwin

  • Mitusch K (2006) Non-commitment in performance evaluation and the problem of information distortions. J Econ Behav Organ 60(4):507–525

  • Murphy KJ, Oyer P (2003) Discretion in executive incentive contracts: theory and evidence. Working paper

  • Nooteboom B (2006) Forms, sources and processes of trust. In: Bachmann R, Zaheer A (eds) Handbook of trust research. Edward Elgar, Cheltenham, pp 247–263

  • Phelan C (2006) Public trust and government betrayal. J Econ Theory 130:27–43

  • Prendergast C (1999) The provision of incentives in firms. J Econ Lit 37:7–63

  • Rajan MV, Reichelstein S (2009) Objective versus subjective indicators of managerial performance. Account Rev 84:209–237

  • Robinson SL, Rousseau DM (1994) Violating the psychological contract: not the exception but the norm. J Organ Behav 15(3):245–259

  • Robinson SL (1996) Trust and breach of the psychological contract. Adm Sci Q 41(4):574–599

  • Rosen S (1992) Contracts and the market for executives. In: Werin L, Wijkander H (eds) Contract economics. Blackwell, Oxford, pp 181–211

  • Rousseau DM (1990) New hire perceptions of their own and their employer’s obligations: a study of psychological contracts. J Organ Behav 11(5):389–400

  • Wicks AC, Berman SL, Jones TM (1999) The structure of optimal trust: moral and strategic implications. Acad Manage Rev 24(1):99–116

  • Wiseman T (2008) Reputation and impermanent types. Games Econ Behav 62:190–210

  • Wixted JT (2004) The psychology and neuroscience of forgetting. Ann Rev Psychol 55:235–269

  • Worchel P (1979) Trust and distrust. In: Austin WG, Worchel S (eds) The social psychology of intergroup relations. Brooks/Cole Publishing, Monterey, CA, pp 174–187


Acknowledgments

We thank Holger Asseburg, Oliver Fabel, Alfred Luhmer, Barbara Schöndube-Pirchegger, Jack Stecher, the editor, two anonymous referees, and audiences at Bonn, Milan, and Rotterdam for helpful comments.

Author information

Corresponding author

Correspondence to Christian Lukas.

Appendix

Proof of Proposition 1

  1. (a)

    Differentiating \({\mathcal{S}}\left( \varvec{\theta}\right)\) as given by Eq. 13 with respect to δ, we obtain \(\frac{\partial {\mathcal{S}}\left( \varvec{\theta}\right) }{\partial \delta } =\sum_{t=1}^{T}S_{t}\left( \varvec{\theta}\right) \left( t-1\right) \delta ^{t-2}>0\). As \(\frac{\partial \delta }{\partial r}=-\left( 1+r\right) ^{-2}<0\), it follows that \(\frac{\partial {\mathcal{S}}\left( \varvec{\theta}\right) }{\partial r}<0\). Let \(\varvec{\theta}^{\ast }\) be the optimal strategy with r and \(\varvec{\theta}^{\prime }\) the optimal strategy with r′ > r. As \({\mathcal{S}}\left( \varvec{\theta}\right)\) is decreasing in r, \({\mathcal{S}}\left( \varvec{\theta}^{\prime },r^{\prime }\right) <{\mathcal{S}}\left( \varvec{\theta}^{\prime },r\right)\), and as \({\mathcal{S}}\left( \varvec{\theta}^{\prime },r\right) <{\mathcal{S}}\left( \varvec{\theta}^{\ast },r\right)\), it follows that \({\mathcal{S}}\left( \varvec{\theta}^{\prime },r^{\prime }\right) <{\mathcal{S}}\left( \varvec{\theta}^{\ast },r\right)\), so the principal’s equilibrium payoff decreases in r.

  2. (b)

    Consider period t with induced trust \(\gamma_{t-1}\). If non-payment is optimal in period t with interest rate r, given \(\gamma_{t-1}\), then non-payment must be optimal in period t with r′ > r given \(\gamma_{t-1}\), too, as the surplus function (or equivalently, the left-hand side of the non-reneging constraint Eq. 12) is decreasing in r. Hence, starting with the same initial level of trust, \(\gamma_{0}\), the induced level of trust in period t with interest rate r, \(\gamma_{t-1}^{r}\), must be at least as high as with r′ > r, i.e., \(\gamma_{t-1}^{r}\geq \gamma_{t-1}^{r^{\prime}}\) for all t; for \(\gamma_{t-1}^{r} < \gamma_{t-1}^{r^{\prime}}\) to be true for some period t there must have been a period τ < t with \(\gamma_{\tau-1}^{r} = \gamma_{\tau-1}^{r^{\prime}}\) and with non-payment for r but payment for r′: a contradiction. Hence, the number of periods \(\sum_{i=1}^{t}\theta _{i}\) in which the bonus is paid up to some period t is weakly decreasing in r.

  3. (c)

    A higher level of ex ante trust increases every summand in Eq. 13 such that \({\mathcal{S}}\) is increasing in γ0. \(\square\)

Proof of Lemma 1

Writing the payoffs \(S_t\) as a function of induced trust \(\gamma_{t-1}\) at the beginning of period t transforms Eq. 13 into \({\mathcal{S}}^{D}=\sum_{t=1}^{T}S_{t}^{D}(\gamma _{t-1}^{{\bf 0}})\delta ^{t-1}\) for strict non-payment, \(\varvec{\theta}={\bf 0}\), and \({\mathcal{S}}^{H}=\sum_{t=1}^{T}S_{t}^{H}(\gamma _{t-1}^{{\bf 1}})\delta ^{t-1}\) for strict payment, \(\varvec{\theta}={\bf 1}\), where \(\gamma _{t-1}^{\varvec{\theta}}\) indicates the level of trust at the beginning of period t contingent on \(\varvec{\theta}\in \left\{ {\bf 0,1}\right\}\). To show how \({\mathcal{S}}^{D}\) and \({\mathcal{S}}^{H}\) behave depending on the principal’s lifetime T, we write them explicitly as functions of T, \({\mathcal{S}}^{D}(T)\) and \({\mathcal{S}}^{H}(T)\). For expositional brevity assume T is continuous. The surpluses become \({\mathcal{S}}^{D}(T)=\int_{t=1}^{T}S_{t}^{D}( \gamma _{t-1}^{{\bf 0}})\delta ^{t-1} dt \) and \({\mathcal{S}}^{H}(T)=\int_{t=1}^{T}S_{t}^{H}(\gamma _{t-1}^{{\bf 1}})\delta ^{t-1}dt \). The first derivatives with respect to T are

$$ {\frac{d{\mathcal{S}}^{D}(T)}{dT}}=S_{T}^{D}(\gamma _{T-1}^{{\bf 0}})\delta ^{T-1}>0;\quad {\frac{d{\mathcal{S}}^{H}(T)}{dT}}=S_{T}^{H}(\gamma _{T-1}^{{\bf 1}})\delta ^{T-1}>0; $$

and second derivatives as

$$ \begin{aligned} {\frac{d^{2}{\mathcal{S}}^{D}(T)}{dT^{2}}}\,=\,&\delta ^{T-1}{\frac{dS_{T}^{D}}{ d\gamma _{T-1}^{{\bf 0}}}}\frac{d\gamma _{T-1}^{{\bf 0}}}{dT} +S_{T}^{D}(\gamma _{T-1}^{{\bf 0}})\delta ^{T-1}\ln \left( \delta \right) \\ {\frac{d^{2}{\mathcal{S}}^{H}(T)}{dT^{2}}}\,=\,&\delta ^{T-1}{\frac{dS_{T}^{H}}{ d\gamma _{T-1}^{{\bf 1}}}}\frac{d\gamma _{T-1}^{{\bf 1}}}{dT} +S_{T}^{H}(\gamma _{T-1}^{{\bf 1}})\delta ^{T-1}\ln \left( \delta \right). \end{aligned} $$

Notice that according to (1)

$$ \begin{aligned} \gamma _{T-1}^{{\bf 0}}&={\frac{\alpha }{\alpha +\beta +T-1}} \\ \gamma _{T-1}^{{\bf 1}}&={\frac{\alpha +T-1}{\alpha +\beta +T-1}} \end{aligned} $$

such that \(\frac{d\gamma _{T-1}^{{\bf 0}}}{dT}=-{\frac{\alpha }{\left( \alpha +\beta +T-1\right) ^{2}}}<0\) and \(\frac{d\gamma _{T-1}^{{\bf 1}}}{dT}={\frac{\beta }{\left( \alpha +\beta +T-1\right) ^{2}}}>0\). Furthermore, we know from Eqs. 14 and 15 that \({\frac{dS_{T}^{D}}{d\gamma _{T-1}^{{\bf 0}}}}>0\) and \({\frac{dS_{T}^{H}}{d\gamma _{T-1}^{{\bf 1}}}}>0\). Assuming r = 0, i.e., δ = 1 and hence ln(δ) = 0, it follows that \({\mathcal{S}}^{D}(T)\) is a strictly increasing concave function of T and \({\mathcal{S}}^{H}(T)\) is a strictly increasing convex function of T. Furthermore, we know that \({\mathcal{S}}^{D}(1)>{\mathcal{S}}^{H}(1)\), \(\lim_{T\rightarrow\infty }{\frac{d{\mathcal{S}}^{D}(T)}{dT}}=0\), and \(\lim_{T\rightarrow\infty }{\frac{d{\mathcal{S}}^{H}(T)}{dT}}=1/2\). Hence, there exists a threshold value for T above which strict payment dominates strict non-payment. The same result applies for positive but sufficiently small r.\(\square\)
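For a concrete picture of the two trust paths used in this proof, the following minimal sketch evaluates \(\gamma_{T-1}^{{\bf 0}}\) and \(\gamma_{T-1}^{{\bf 1}}\) for illustrative prior parameters α and β; these parameter values are ours, not the paper’s calibration.

```python
# Trust at the beginning of period T under strict non-payment (theta = 0)
# and strict payment (theta = 1), as in the proof of Lemma 1.
# alpha and beta are illustrative prior parameters, not values from the paper.

alpha, beta = 1.0, 1.0

def trust_all_defect(T):
    return alpha / (alpha + beta + T - 1)

def trust_all_honor(T):
    return (alpha + T - 1) / (alpha + beta + T - 1)

for T in (1, 2, 5, 10, 50):
    print(f"T={T:3d}: gamma^0={trust_all_defect(T):.3f}, "
          f"gamma^1={trust_all_honor(T):.3f}")
# gamma^0 decreases toward 0 and gamma^1 increases toward 1 as T grows,
# which is what drives the concavity/convexity argument above.
```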

Proof of Proposition 2

Assume r = 0 and consider strategy \(\varvec{ \theta }^{\prime }=\left(\ldots,0,1,\ldots \right) \) where the principal does not pay the bonus in period τ but does pay it in τ + 1. Now consider strategy \(\varvec{ \theta } =\left( \ldots,1,0,\ldots\right) \) where the principal pays in τ and does not pay in period τ + 1. All else equal, we can compare strategies \(\varvec{ \theta }^{\prime }\) and \(\varvec{ \theta }\) by comparing their ex ante expected payoffs from periods τ and τ + 1. We obtain

$$ \begin{aligned} {\mathcal{S}}(\varvec{ \theta })&=S_{\tau }^{H}\left( \gamma _{\tau -1}\right) +S_{\tau +1}^{D}\left( \gamma _{\tau }\right) \\ {\mathcal{S}}(\varvec{ \theta }^{\prime })&=S_{\tau }^{D}\left( \gamma _{\tau -1}\right) +S_{\tau +1}^{H}\left( \gamma _{\tau }^{\prime }\right). \end{aligned} $$

\(\gamma_{\tau-1}\) is the level of trust induced at the beginning of period τ, and \(\gamma_{\tau}\) (\(\gamma_{\tau}^{\prime}\)) is the level of trust at the beginning of period τ + 1 given payment (non-payment) in τ. We know that \(\gamma_{\tau} > \gamma_{\tau-1} > \gamma_{\tau}^{\prime}\). The difference of expected payoffs is

$$ {\mathcal{S}}(\varvec{ \theta })-{\mathcal{S}}(\varvec{ \theta }^{\prime })=\left[ S_{\tau }^{H}\left( \gamma _{\tau -1}\right) -S_{\tau +1}^{H}\left( \gamma _{\tau }^{\prime }\right) \right] +\left[ S_{\tau +1}^{D}\left( \gamma _{\tau }\right) -S_{\tau }^{D}\left( \gamma _{\tau -1}\right) \right] $$

As \(S^{D}\) and \(S^{H}\) are increasing in γ, both brackets [·] are strictly positive and therefore \({\mathcal{S}}(\varvec{ \theta })-{\mathcal{S}}(\varvec{ \theta }^{\prime })>0\). By induction, this argument also holds for any number of non-payments in a row. As long as r is sufficiently low, the same logic applies for r > 0. Hence, if discounting is low and the principal’s optimal strategy exhibits τ payments, the optimal strategy pattern is \(\varvec{ \theta }^{\ast }=(\theta _{1}=1,\theta _{2}=1,\ldots,\theta _{\tau }=1,\theta _{\tau +1}=0,\ldots,\theta _{T}=0)\): trust will be raised to its desired maximum until period τ, and in the remaining periods the gains from trust formation will be earned by non-payments.\(\square\)
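The swap argument can be illustrated numerically. In the sketch below, the trust updates follow a beta-updating rule; S_H and S_D are stand-in functions chosen only to be increasing in trust, not the paper’s per-period surplus functions, and the prior parameters and the history before period τ are arbitrary.

```python
# Illustration of the swap argument in the proof of Proposition 2.
# S_H and S_D are stand-in increasing functions of trust, chosen only for
# illustration; they are NOT the paper's per-period surplus functions.
# alpha, beta, and the history before period tau are likewise arbitrary.

alpha, beta = 1.0, 1.0

def trust(paid, periods):
    """Posterior mean of gamma after `paid` payments in `periods` periods."""
    return (alpha + paid) / (alpha + beta + periods)

def S_H(gamma):   # surplus when the bonus is paid (increasing in gamma)
    return gamma / (2 * (2 - gamma))

def S_D(gamma):   # surplus when the bonus is withheld (increasing in gamma)
    return 3 * gamma / (2 * (2 - gamma))

paid_before, tau = 3, 5                   # history before period tau (arbitrary)
g_before = trust(paid_before, tau - 1)    # gamma_{tau-1}
g_pay    = trust(paid_before + 1, tau)    # gamma_tau   (payment in tau)
g_nopay  = trust(paid_before, tau)        # gamma'_tau  (non-payment in tau)
assert g_pay > g_before > g_nopay

# 'pay then withhold' versus 'withhold then pay' over periods tau, tau+1:
diff = (S_H(g_before) + S_D(g_pay)) - (S_D(g_before) + S_H(g_nopay))
print(f"S(theta) - S(theta') = {diff:.4f} > 0")
```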

Proof of Proposition 3

  1. (a)

    The surplus Eq. 13 for strategy \(\varvec{ \theta }^{\ast }=(\theta _{1}=1,\theta _{2}=1,\ldots,\theta _{T-1}=1,\theta _{T}=0)\) obtains as

    $$ \begin{aligned} {\mathcal{S}}(\varvec{ \theta }^{\ast }) =\,&S_{1}^{H}(\gamma _{0})+\sum_{t=2}^{T-1}S_{t}^{H}(\gamma _{t-1}=1)\delta ^{t-1}+S_{T}^{D}(\gamma _{T-1}=1)\delta ^{T-1}\\ =\,&{\frac{\gamma _{0}}{2\left( 2-\gamma _{0}\right) }}+{\frac{1}{2}} \sum_{t=2}^{T-1}\delta ^{t-1}+{\frac{3}{2}}\delta ^{T-1}, \end{aligned} $$

    where γ0 denotes the prior level of trust. In the final period, the principal will not pay the bonus, i.e. \(\theta_{T}=0\), because \( S_{T}^{D}(\theta ^{T-1})\geq S_{T}^{H}(\theta ^{T-1})\) for any history of play. Since alternating strategies are ruled out by Eq. 16, any strategy with \(\theta_{1}=0\) leads to a surplus of \({\mathcal{S}}(\theta _{1}=0,\theta _{2}=0,\ldots,\theta _{T}=0)={\frac{\gamma _{0}\left( 4-\gamma _{0}\right) }{2\left( 2-\gamma _{0}\right)^{2} }}<{\frac{3}{2}},\) which is clearly dominated by \({\mathcal{S}}(\varvec{ \theta }^{\ast })\) if r is sufficiently low (δ sufficiently close to 1). A strategy \(\varvec{ \theta }^{\tau }=(\theta_{1}=1,\theta _{2}=1,\ldots,\theta _{\tau }=1,\theta _{\tau +1}=0,\ldots,\theta _{T}=0),2\leq \tau \leq T-1,\) leads to a surplus \({\mathcal{S}}(\varvec{ \theta }^{\tau })={\frac{\gamma _{0}}{2\left( 2-\gamma _{0}\right) }}+{\frac{1}{2}}\sum_{t=2}^{\tau }\delta ^{t-1}+{\frac{3}{2}}\delta ^{\tau }\). Hence, \({\mathcal{S}}(\varvec{ \theta }^{\ast })-{\mathcal{S}}( \varvec{ \theta }^{\tau })={\frac{1}{2}}\sum_{t=\tau +1}^{T-1}\delta ^{t-1}- {\frac{3}{2}}\left( \delta ^{\tau }-\delta ^{T-1}\right) \) is positive for all τ < T − 1 if r is sufficiently low. A numerical check of these expressions is sketched after this proof.

  2. (b)

    If r is sufficiently large, only the first period payoff of the surplus function matters. The first period payoff is maximized by non-payment, θ1 = 0. \(\square\)
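As mentioned in part (a), the closed-form surpluses can be checked numerically. The sketch below evaluates the three expressions exactly as stated in the proof; the values of γ0, T, and δ are arbitrary illustrative choices.

```python
# Numerical check of the surplus comparison in the proof of Proposition 3(a).
# gamma0, T and delta are arbitrary illustrative values.

gamma0, T, delta = 0.5, 10, 0.99

def S_star():
    """Surplus of theta* = (1,...,1,0)."""
    return (gamma0 / (2 * (2 - gamma0))
            + 0.5 * sum(delta ** (t - 1) for t in range(2, T))
            + 1.5 * delta ** (T - 1))

def S_never():
    """Surplus of never paying, theta = (0,...,0)."""
    return gamma0 * (4 - gamma0) / (2 * (2 - gamma0) ** 2)

def S_tau(tau):
    """Surplus of paying in periods 1..tau and never afterwards (2 <= tau <= T-1)."""
    return (gamma0 / (2 * (2 - gamma0))
            + 0.5 * sum(delta ** (t - 1) for t in range(2, tau + 1))
            + 1.5 * delta ** tau)

print(f"S(theta*)    = {S_star():.4f}")
print(f"S(never pay) = {S_never():.4f}")
for tau in range(2, T):
    print(f"S(theta^{tau}) = {S_tau(tau):.4f}")
# For delta close to 1, S(theta*) exceeds S(never pay) and every S(theta^tau)
# with tau < T-1, as claimed in the proof.
```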

Proof of Lemma 2

Assume r = 0. Consider a representative sequence \(\theta ^{R}\left( \tau ^{F},1\right) \) consisting of \(\tau^{F}\) payments and one non-payment. The surplus (footnote 27) from the second or higher repetition of the representative sequence \(\theta ^{R}\left( \tau ^{F},1\right) \) is independent of the period in which the non-payment is placed within \(\theta ^{R}\left( \tau ^{F},1\right) \). At each payment the agent recalls \(\tau^{F}-1\) payments and one non-payment, and in the period of non-payment the agent recalls \(\tau^{F}\) payments (full trust). The surplus is equal to \({\mathcal{S}}=\tau ^{F}S^{H}\left( {\frac{\tau ^{F}-1}{\tau ^{F}}}\right) +S^{D}\left( 1\right) \) (footnote 28).

Now assume a second non-payment is optimal, d = 2. We consider two different strategies in placing the second non-payment. In strategy A it is placed immediately after the first non-payment, and in strategy B the two non-payments are not placed in a row. The surpluses related to strategies A and B are given by

$$ \begin{aligned} {\mathcal{S}}^{A}&=S^{D}\left( 1\right) +S^{D}\left( {\frac{\tau ^{F}-1}{\tau ^{F}}}\right) +\left( \tau ^{F}-1\right) S^{H}\left( {\frac{\tau ^{F}-2}{\tau ^{F}}}\right) +S^{H}\left( {\frac{\tau ^{F}-1}{\tau ^{F}}}\right) \\ {\mathcal{S}}^{B} &=2S^{D}\left( {\frac{\tau ^{F}-1}{\tau ^{F}}}\right) +\left( \tau ^{F}-2\right) S^{H}\left( {\frac{\tau ^{F}-2}{\tau ^{F}}}\right) +2S^{H}\left( {\frac{\tau ^{F}-1}{\tau ^{F}}}\right) \end{aligned} $$

The difference of surpluses is \({\mathcal{S}}^{A}-{\mathcal{S}}^{B}=S^{D}\left( 1\right) -S^{D}\left( {\frac{\tau ^{F}-1}{\tau ^{F}}}\right) -\left( S^{H}\left( {\frac{\tau ^{F}-1}{\tau ^{F}}}\right) -S^{H}\left( {\frac{\tau ^{F}-2}{\tau ^{F}}}\right) \right) \). Notice that \(1-{\frac{\tau ^{F}-1}{\tau ^{F}}}={\frac{\tau ^{F}-1}{\tau ^{F}}}-{\frac{\tau ^{F}-2}{\tau ^{F}}}={\frac{1}{\tau ^{F}}}\). As both \(S^{D}\) and \(S^{H}\) are increasing convex functions of γ, and as the marginal surpluses satisfy \(\frac{dS^{D}}{d\gamma}>\frac{dS^{H}}{d\gamma}\) for all γ, \({\mathcal{S}}^{A}-{\mathcal{S}}^{B}\) is strictly positive. Hence, if a second non-payment is optimal it must be placed immediately after the first non-payment. The same argument applies if d > 2 non-payments are optimal. Hence, if a repetition of representative sequences \(\theta ^{R}\left( \tau ^{F},d\right) \) is consistent with equilibrium behavior, the d non-payments must be placed in a row such that there is at most one change from payment to non-payment within a representative sequence. For positive but sufficiently low r the same result applies.\(\square\)
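To see Lemma 2 at work, the following sketch computes \({\mathcal{S}}^{A}\) and \({\mathcal{S}}^{B}\) for one recall length. The stand-in surplus functions below are chosen only to satisfy the properties used in the proof (increasing, convex, with a steeper \(S^{D}\)); they are not the paper’s surplus functions, and the recall length is arbitrary.

```python
# Illustration of Lemma 2: with bounded recall, trust is the share of payments
# among the last tau_F remembered decisions, and two non-payments are better
# placed back to back (strategy A) than spread out (strategy B).
# S_H and S_D are stand-in increasing convex functions with a steeper S_D,
# as assumed in the proof; they are NOT the paper's surplus functions.

tau_F = 5

def S_H(gamma):
    return gamma / (2 * (2 - gamma))

def S_D(gamma):
    return 3 * gamma / (2 * (2 - gamma))

g = lambda k: k / tau_F   # trust when k of the last tau_F decisions were payments

S_A = (S_D(g(tau_F)) + S_D(g(tau_F - 1))
       + (tau_F - 1) * S_H(g(tau_F - 2)) + S_H(g(tau_F - 1)))
S_B = (2 * S_D(g(tau_F - 1))
       + (tau_F - 2) * S_H(g(tau_F - 2)) + 2 * S_H(g(tau_F - 1)))

print(f"S_A = {S_A:.4f}, S_B = {S_B:.4f}, S_A - S_B = {S_A - S_B:.4f} > 0")
```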

Proof of Proposition 4

Assume r = 0 for the whole proof. The proof consists of three steps:

  1. (a)

    We first prove that, independent of the initial trust γ0 at the beginning of the first period, it is always optimal to establish full trust right from the beginning of the relationship by selecting τF payments in a row.

  2. (b)

    We next prove that, given full trust has been established right from the beginning of the relationship, at least one non-payment is optimal.

  3. (c)

    Finally, we show that repetition of the representative sequence is optimal.

(a) Assume T > 2\(\tau^{F}\). We first show that strategy \(\varvec{ \theta }(\varvec{\tau}^{F},{\bf 0})=(\theta _{1}=\theta _{2}=\cdots=\theta _{\tau ^{F}}=1,0,0,\ldots,0),\) i.e., fulfill the implicit contract for \(\tau^{F}\) periods and then never again, dominates all strategies \(\varvec{ \theta }(\varvec{ \tau}^{F}{\bf -i},{\bf 0})=(\theta _{1}=\theta _{2}=\cdots=\theta _{\tau ^{F}-i}=1,0,0,\ldots,0),\,i=1,\ldots,\tau ^{F}-1\) if γ0 < 1. Strategy \( \varvec{ \theta }(\varvec{ \tau}^{F},{\bf 0})\) yields the surplus (footnote 29)

$$ {\mathcal{S}}(\varvec{ \theta }(\varvec{ \tau}^{F},{\bf 0} ))=\sum_{t=1}^{\tau ^{F}}S^{H}(\theta ^{t-1})+\sum_{t=\tau ^{F}+1}^{2\tau ^{F}}S^{D}(\theta ^{t-1}), $$
(18)

whereas strategy \(\varvec{ \theta }(\varvec{ \tau}^{F}{\bf -i},{\bf 0})\) yields the surplus

$$ {\mathcal{S}}(\varvec{ \theta }(\varvec{ \tau}^{F}{\bf -i},{\bf 0} ))=\sum_{t=1}^{\tau ^{F}-i}S^{H}(\theta _{i}^{t-1})+\sum_{t=\tau ^{F}-i}^{2\tau ^{F}-i}S^{D}(\theta _{i}^{t-1}), $$
(19)

where the subscript i at \(\theta_{i}^{t-1}\) indicates the history under \( \varvec{ \theta }(\varvec{ \tau}^{F}{\bf -i},{\bf 0})\) as compared with \( \varvec{ \theta }(\varvec{ \tau}^{F},{\bf 0})\). Note that Eq. 18 contains i more strictly positive elements than Eq. 19 due to i additional payments and limited recall. Using Eqs. 18 and 19, the profit difference \(\Delta ={\mathcal{S}}(\varvec{ \theta }(\varvec{ \tau}^{F}, {\bf 0}))-{\mathcal{S}}(\varvec{ \theta }(\varvec{ \tau}^{F}{\bf -1,0}))\) amounts to

$$ \Delta =S^{H}(\theta ^{\tau ^{F}-1})+\sum_{t=\tau ^{F}+1}^{2\tau ^{F}}\left[ S^{D}(\theta ^{t-1})-S^{D}(\theta _{i}^{t-2})\right]. $$

Because \({\frac{dS^{H}}{d\gamma _{t-1}}}>0\) and \({\frac{dS^{D}}{d\gamma _{t-1}}}>0\) for any history, payment in period \(\tau^{F}\) under strategy \(\varvec{ \theta }(\varvec{ \tau}^{F},{\bf 0})\) implies

$$ S^{D}(\theta ^{t-1})-S^{D}(\theta _{i}^{t-1})>0,\hbox { }t=\tau ^{F}+1,\ldots,2\tau ^{F}. $$
(20)

This proves \({\mathcal{S}}(\varvec{ \theta }(\varvec{ \tau}^{F},{\bf 0}))> {\mathcal{S}}(\varvec{ \theta }(\varvec{ \tau}^{F}{\bf -1},{\bf 0})).\) By iteration, it can be shown that \({\mathcal{S}}(\varvec{ \theta }(\varvec{ \tau} ^{F}{\bf -i},{\bf 0}))>{\mathcal{S}}(\varvec{ \theta }(\varvec{ \tau}^{F} {\bf -i-1},{\bf 0})),\,i=2,\ldots,\tau ^{F}-1.\) Thus, \({\mathcal{S}}( \varvec{ \theta }(\varvec{ \tau}^{F},{\bf 0}))>{\mathcal{S}}(\varvec{ \theta } (\varvec{ \tau}^{F}{\bf -i},{\bf 0}))\) for all i = 1, …, τF − 1.

Next, we show that strategy \(\varvec{ \theta }(\varvec{ \tau}^{F},\theta _{\tau ^{F}+1},\ldots,\theta _{T}),\) where the \(\theta_{t}\)’s, \(t=\tau^{F}+1,\ldots,T\), are optimally chosen, dominates all other possible strategies. Assume that strategy \(\varvec{ \theta }(\varvec{ \tau}^{F}{\bf -i},{\bf 0})\) is changed by replacing a non-payment in period \(t=\widetilde{t}=\tau ^{F}-i+1{,}\ldots{,}(2\tau ^{F}-i)\) with a payment. Because \({\frac{d^{2}S^{H}}{ d\gamma _{t-1}^{2}}}>0\) and \({\frac{d^{2}S^{D}}{d\gamma _{t-1}^{2}}}>0\) (which only holds in the case of limited recall, so that the sample size, i.e., the denominator in the expected level of trust, remains constant at \(\tau^{F}\)), marginal gains (losses) from payment (non-payment) are increasing in previous payments (non-payments). Hence, if a non-payment is replaced by a payment it has to be in period \(\widetilde{t}=\tau ^{F}-i+1.\) Optimality of that replacement follows from the steps of the proof above, implying \({\mathcal{S}}(\varvec{ \theta }(\varvec{ \tau}^{F}{\bf -i+1}, {\bf 0}))>{\mathcal{S}}(\varvec{ \theta }(\varvec{ \tau}^{F}{\bf -i}, {\bf 0})).\) Again, by iteration the optimality of additional replacements in periods \(\widetilde{t}=\tau ^{F}-i+2,\ldots,\tau ^{F}\) can be shown, leading (again) to \(\varvec{ \theta }(\varvec{ \tau}^{F},{\bf 0})\succ \varvec{ \theta }(\varvec{ \tau}^{F}{\bf -i},{\bf 0})\) for all \(i=1,\ldots,\tau^{F}-1\). Obviously, \(\varvec{ \theta }(\varvec{ \tau}^{F},\theta _{\tau ^{F}+1},\ldots,\theta _{T})\succeq \varvec{ \theta }(\varvec{ \tau}^{F},{\bf 0} )\), proving optimality of \(\varvec{ \theta }(\varvec{ \tau}^{F},\theta _{\tau ^{F}+1},\ldots,\theta _{T}).\)

(b) Assume a sequence of τF payments has been selected leading to full trust. Now compare the following sequences

$$ \varvec{ \theta }_{d=1} =(\theta _{\tau ^{F}+1}=0,1,1,\ldots,\theta _{2\tau ^{F}+2}=1) $$
(21)
$$ \varvec{ \theta }_{d=0} =(\theta _{\tau ^{F}+1}=1,1,1,\ldots,\theta _{2\tau ^{F}+2}=1), $$
(22)

starting in period \(\tau^{F}+1\). Note that both sequences consist of (\(\tau^{F}+1\)) elements to ensure that full trust is (re)established at the end of the sequences. The crucial step in the proof is that the trust level given \(\varvec{\theta}_{d=1}\) remains constant after the decision \(\theta _{{\tau ^{F} + 1}} = 0\) in period \(\tau ^{F} + 1\), because when moving on from period (\(\tau^{F}+1+i\)) to (\(\tau^{F}+1+i+1\)), i ∈ {1, 2,…,\(\tau^{F}\)}, the agent’s limited recall capability “deletes” the payment in period (i + 1) from memory while “storing” the payment from period (\(\tau^{F}+1+i\)). Thus the single non-payment in period \(\tau^{F}+1\) reduces the trust level under \(\varvec{ \theta }_{d=1}\) from full trust 1 to \({\frac{\tau ^{F}-1}{ \tau ^{F}}}\). The surpluses associated with Eqs. 21 and 22 are then

$$ \begin{aligned} {\mathcal{S}}(\varvec{ \theta }_{d=1}) =\,&{\frac{3}{2}}+\tau ^{F}\cdot S^{H}\left( {\frac{\tau ^{F}-1}{\tau ^{F}}}\right) \\ =\,&{\frac{3}{2}}+\tau ^{F}\cdot {\frac{\tau ^{F}-1}{2(\tau ^{F}+1)}} \\ {\mathcal{S}}(\varvec{ \theta }_{d=0}) =\,&(\tau ^{F}+1)\cdot {\frac{1}{2}}. \end{aligned} $$

Some algebra shows that \({\mathcal{S}}(\varvec{ \theta }_{d=1})>{\mathcal{S}}( \varvec{ \theta }_{d=0})\) holds for any τF.
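The “some algebra” can also be verified directly from the two closed forms above; the short check below does exactly that for a range of recall lengths.

```python
# Check of S(theta_{d=1}) > S(theta_{d=0}) in part (b) of the proof of
# Proposition 4, using the closed forms stated above.

def S_one_nonpayment(tau_F):
    return 1.5 + tau_F * (tau_F - 1) / (2 * (tau_F + 1))

def S_all_payments(tau_F):
    return (tau_F + 1) * 0.5

for tau_F in range(1, 21):
    assert S_one_nonpayment(tau_F) > S_all_payments(tau_F)
print("S(theta_{d=1}) > S(theta_{d=0}) for tau_F = 1, ..., 20")
```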

(c) We know from parts (a) and (b) that the principal’s optimal strategy exhibits payment from period one to \(\tau^{F}\) and at least one non-payment thereafter. From Lemma 2 it is known that switching back and forth between one payment and one non-payment will never be optimal. Hence, the first sequence that is played consists of \(\tau^{F}\) payments followed by d ≥ 1 non-payments, \(\theta ^{R}(\tau ^{F},d)=\left( \theta _{1}=1,\theta _{2}=1,\ldots,\theta _{\tau ^{F}}=1,\theta _{\tau ^{F}+1}=0,\ldots,\theta _{\tau ^{F}+d}=0\right) \). After \(\theta ^{R}(\tau ^{F},d)\) has been played once, induced trust is again less than one. Applying the same arguments as in (a) and (b), it is again optimal to re-induce full trust by paying the bonus \(\tau^{F}\) times in a row and then harvesting it by d non-payments in a row. Hence, at the optimum the representative sequence \(\theta^{R}(\tau^{F},d)\) will be repeated as long as possible, given there are at least \(\tau^{F}\) periods remaining to harvest trust after it has been raised to its maximum. The periods after the last repetition of \(\theta^{R}(\tau^{F},d)\) are subject to separate optimization.

As the results in (a), (b), and (c) are derived for r = 0 they also hold for a sufficiently low interest rate.\(\square\)

Proof of Proposition 5

The idea of the proof is to transform strategies into profit annuities (the optimal strategy will have the highest profit annuity). Optimal strategies are characterized by their representative sequence. Effects of the ex ante distribution of trust are eliminated after the initial play of \(\tau^{F}\) payments (which has already been proven optimal). Subsequent repetitions of the representative sequence will then be played with recurring levels of trust solely determined by \(\tau^{F}\) and d. The first decision, and its associated profit, that is not influenced by ex ante trust is the first non-payment after \(\tau^{F}\) payments. Therefore we rearrange the representative sequence such that d non-payments are followed by \(\tau^{F}\) payments. With \(\tau^{F}\) and d given, the profit annuity \(a(d,\tau^{F})\) based on the decision sequence \(\theta(d,\tau^{F})\) is

$$ a(d,\tau ^{F})=AF(d,\tau ^{F})\cdot \pi _{0}(d,\tau ^{F}), $$
(23)

where \(\pi _{0}=\left( \sum_{t=1}^{d}{\frac{S_{t}^{D}(\gamma _{t-1})}{ (1+r)^{t}}}+\sum_{t=d+1}^{\tau ^{F}+d}{\frac{S_{t}^{H}(\gamma _{t-1})}{ (1+r)^{t}}}\right) \) denotes the present value resulting from playing the sequence once, and \(AF(d,\tau ^{F})={\frac{(1+r)^{(\tau ^{F}+d)}\cdot r}{ (1+r)^{(\tau ^{F}+d)}-1}}\) is the annuity factor. (The dependence of \(a(d,\tau^{F})\) on r is suppressed for notational brevity. For the same reason, both d and \(\tau^{F}\) are assumed to be continuous.)
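The annuity construction in Eq. 23 can be illustrated numerically. The sketch below reuses the stand-in surplus functions from the earlier sketches (again, not the paper’s payoffs), models bounded-recall trust as the share of payments among the last \(\tau^{F}\) decisions, and searches over d for the maximizer of \(a(d,\tau^{F})\); the interest rate r is an arbitrary illustrative value.

```python
# Sketch of the profit annuity a(d, tau_F) from Eq. 23 for the rearranged
# representative sequence (d non-payments followed by tau_F payments).
# S_H and S_D are the same stand-in surplus functions as in the earlier
# sketches, NOT the paper's; r is an arbitrary illustrative interest rate.

r = 0.01

def S_H(gamma):
    return gamma / (2 * (2 - gamma))

def S_D(gamma):
    return 3 * gamma / (2 * (2 - gamma))

def trust(t, d, tau_F):
    """Share of payments among the last tau_F decisions at period t of the
    rearranged sequence, assuming the previous repetition ended with tau_F
    payments (full trust)."""
    recalled_nonpay = min(t - 1, d) if t <= tau_F + 1 else d - (t - tau_F - 1)
    return (tau_F - recalled_nonpay) / tau_F

def annuity(d, tau_F):
    n = tau_F + d
    pv = sum((S_D if t <= d else S_H)(trust(t, d, tau_F)) / (1 + r) ** t
             for t in range(1, n + 1))
    af = (1 + r) ** n * r / ((1 + r) ** n - 1)   # annuity factor AF(d, tau_F)
    return af * pv

for tau_F in (4, 6, 8):
    best_d = max(range(1, tau_F), key=lambda d: annuity(d, tau_F))
    print(f"tau_F = {tau_F}: optimal d* = {best_d}")
# Under these stand-in payoffs the optimal number of non-payments d* weakly
# increases with the recall length tau_F, in line with Proposition 5.
```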

Lemma 3

For a given \(\tau^{F}\), the profit annuity a(d) has a unique maximizer \(d^{\ast}\).

Proof of Lemma 3

Note that \({\frac{\partial }{\partial d}}\,AF(d,\tau ^{F})<0.\) Furthermore, the first term \(\sum_{t=1}^{d}{\frac{S_{t}^{D}(\gamma _{t-1})}{(1+r)^{t}}}\) of \(\pi_{0}(d,\tau^{F})\) is increasing in d: adding one non-payment adds \({\frac{ S_{d+1}^{D}(\gamma _{d})}{(1+r)^{d+1}}}\) to the sum while leaving previous summands unchanged; the second term \(\sum_{t=d+1}^{\tau ^{F}+d}{\frac{ S_{t}^{H}(\gamma _{t-1})}{(1+r)^{t}}}\) is decreasing in d: adding one non-payment decreases all \(S_{t}^{H}(\gamma_{t-1})\) because induced trust in all periods decreases. Hence either

  1. (a)

    \({\frac{\partial }{\partial d}}\pi _{0}(d,\tau ^{F}) <0\hbox { } \forall d,\hbox { or}\)

  2. (b)

    \({\frac{\partial }{\partial d}}\pi _{0}(d,\tau ^{F}) >0\hbox { } \forall d,\hbox { or}\)

  3. (c)

    \({\frac{\partial }{\partial d}}\pi _{0}(d,\tau ^{F}) <0\hbox { } \forall d\in \lbrack 1,\overline{d}] \hbox { and }{\frac{\partial }{\partial d}}\pi _{0}(d,\tau ^{F}) >0\hbox { if }d\in ( \overline{d},\tau ^{F}-1],\hbox { or}\)

  4. (d)

    \({\frac{\partial }{\partial d}}\pi _{0}(d,\tau ^{F}) > 0\hbox { } \forall d\in \lbrack 1,\widetilde{d}] \hbox { and }{\frac{\partial }{\partial d}}\pi _{0}(d,\tau ^{F}) < 0\hbox { if }d\in ( \widetilde{d},\tau ^{F}-1]\)

must hold. The proof of Proposition 4 shows \(a(1,\tau^{F}) > a(0,\tau^{F})\) for all \(\tau^{F}\). Hence, either (b) or (d) holds. It follows that either (i) \({\frac{\partial }{\partial d}}a(d,\tau ^{F})>0\) for all \(d\leq \left( \tau ^{F}-1\right)\), or (ii) \({\frac{\partial }{\partial d}}a(d,\tau ^{F})>0\) for all \(d\in \lbrack 1,d^{\ast }],d^{\ast }\leq \widetilde{d},\) and \({\frac{\partial }{\partial d}}a(d,\tau ^{F})<0\) if \( d\in (d^{\ast },\tau ^{F}-1]\). Therefore \(d^{\ast }\leq \left( \tau ^{F}-1\right) \) is the unique maximizer of \(a(d,\tau^{F})\). \(\square\)

Now assume \(d^{\ast }=(\tau ^{F}-1)\) is the unique maximizer of \(a(d,\tau^{F})\). Then it is obvious that \(d^{\ast}\) increases in \(\tau^{F}\). Assume, to the contrary, that \(d^{\ast }<\left( \tau ^{F}-1\right)\) is optimal; hence \( \left( d^{\ast }+1\right) \) is not optimal given \(\tau^{F}\). Using Eq. 23, the condition for optimality of \((d^{\ast}+1)\) is

$$ \begin{aligned} a(d^{\ast }+1,\tau ^{F}) >&a(d^{\ast },\tau ^{F})\\ {\frac{AF(d^{\ast }+1,\tau ^{F})}{AF(d^{\ast },\tau ^{F})}}\cdot \pi _{0}(d^{\ast }+1,\tau ^{F}) >&\pi _{0}(d^{\ast },\tau ^{F}) \end{aligned} $$
(24)

Observe that \(\lim_{\tau ^{F}\rightarrow \infty }{\frac{AF(d^{\ast }+1,\tau ^{F}) }{AF(d^{\ast },\tau ^{F})}}=1.\) Therefore the left-hand side of (24) converges to \(\lim_{\tau ^{F}\rightarrow \infty }\pi _{0}(d^{\ast }+1,\tau ^{F})=\pi _{0}(d^{\ast },\tau ^{F})+{\frac{ S_{d^{\ast }+1}^{D}(\gamma _{d^{\ast }})-S_{d^{\ast }+1}^{H}(\gamma _{d^{\ast }})}{(1+r)^{d^{\ast }+1}}}\), and the relation in (24) will hold if \(\tau^{F}\) increases sufficiently.\(\square\)


About this article

Cite this article

Lukas, C., Schöndube, J.R. Trust and adaptive learning in implicit contracts. Rev Manag Sci 6, 1–32 (2012). https://doi.org/10.1007/s11846-010-0045-2
