
Artificial Intelligence and Law Enforcement

  • Chapter in: Regulating Artificial Intelligence

Abstract

Artificial intelligence is increasingly able to autonomously detect suspicious activities (‘smart’ law enforcement). In certain domains, technology already fulfills the task of detecting suspicious activities better than human police officers ever could. In such areas, i.e. if and where smart law enforcement technologies actually work well enough, legislators and law enforcement agencies should consider their use. Unfortunately, the German Constitutional Court, the European Court of Justice, and the US Supreme Court are all struggling to develop convincing and clear-cut guidelines to direct these legislative and administrative considerations. This article attempts to offer such guidance: First, lawmakers need to implement regulatory provisions in order to maintain human accountability if AI-based law enforcement technologies are to be used. Secondly, AI law enforcement should be used, if and where possible, to overcome discriminatory traits in human policing that have plagued some jurisdictions for decades. Finally, given that smart law enforcement promises an ever more effective and even ubiquitous enforcement of the law—a ‘perfect’ rule of law, in that sense—it invites us as democratic societies to decide if, where, and when we might wish to preserve the freedom to disobey the rule(s) of law.


Notes

  1. 1.

    In doing so, this Chapter also complements the more specific analyses provided by Buchholtz, Schemmel, paras 32–46, and Braun Binder, paras 16 et seq., on legal tech, financial market regulation, and tax law enforcement, respectively.

  2. 2.

    For the general definition of artificial intelligence informing this Book cf. Rademacher and Wischmeyer, paras 5–6.

  3. 3.

    Or some other incident requiring state interference, e.g. an attempted suicide or the identification of a person for whom an arrest warrant is issued.

  4. 4.

    That excludes more ‘traditional’ big data technologies such as breathalyzers, field testing kits, or DNA analyses. This definition is similar to the one proposed by Rich (2016), pp. 891–892, who correctly notes that even though ‘[t]hese traditional technologies can be exceptionally helpful to police in establishing the “historical facts” of what happened’, they cannot analyze on their own ‘groups of disparate facts together and [draw] conclusions about the probability of an individual[’s] non-compliance’. It is the specific feature of smart law enforcement technologies—called ‘automated suspicion algorithms’ by Rich—that they attempt to apply patterns that are woven with such density that a ‘match’ qualifies, per se, as indicative of suspicious activity (cf. para 29). Furthermore, the definition also excludes so-called ‘impossibility structures’, i.e. technology which not only detects suspicious activity, but at a more sophisticated level aims at making illegal conduct physically impossible (cf. Rich 2013, pp. 802–804; see also, with a different terminology, Cheng 2006, p. 664: ‘Type II structural controls’, Mulligan 2008, p. 3: ‘perfect prevention’, and Rosenthal 2011, p. 579: ‘digital preemption’).

  5. 5.

    Cf., for a definition, in this Book Rademacher and Wischmeyer, and the thorough explanation offered by Rich (2016), pp. 880–886, esp. 883.

  6. 6.

    See, e.g., Big Brother Watch (2018), pp. 25–33; Chaos Computer Club (2018).

  7. 7.

    Compare to the technical limitations still described by Candamo et al. (2010), esp. p. 215. For video summarization technologies cf. Thomas et al. (2017).

  8. 8.

    Ferguson (2017), pp. 88–90.

  9. 9.

    Big Brother Watch (2018), pp. 25–30; on the current legal framework governing the use of CCTV and comparable surveillance technologies in the UK cf. McKay (2015), paras 5.181 et seq.

  10. 10.

    The historical sensitivity, of course, translates into a rather restrictive interpretation of constitutional rules when it comes to video surveillance, cf. Wysk (2018), passim; Bier and Spiecker gen Döhmann (2012), pp. 616 et seq., and infra, paras 17–18.

  11. 11.

    Bundesministerium des Innern (2018). But cf. Chaos Computer Club (2018), rebutting the Ministry’s optimistic evaluation. Similarly RWI (2018).

  12. 12.

    Bundespolizeipräsidium (2018), p. 35.

  13. 13.

    The EU-funded iBorderCtrl-project (www.iborderctrl.eu) is testing software that is intended to detect whether persons are lying at border controls; see, for a first assessment, Algorithm Watch (2019), pp. 36–37.

  14. 14.

    E.g. Bouachir et al. (2018): video surveillance for real-time detection of suicide attempts.

  15. 15.

    Davenport (2016); Joh (2014), pp. 48–50; for further examples see Capers (2017), pp. 1271–1273. Comparable systems are being tested in Germany, too, cf. Djeffal, para 9, and Wendt (2018).

  16. 16.

    Ferguson (2017), p. 86.

  17. 17.

    In the final version of what is now Directive (EU) 2019/790 the explicit reference to content recognition technologies has been removed (cf. Article 17 of the Directive). Whether that effectively avoids a de facto obligation on information service providers to apply such filter technologies remains to be seen. For an early discussion of technological means of ‘automatic enforcement’ see Reidenberg (1998), pp. 559–560. See also Krönke, para 44, who expects a ‘de facto obligation’ of service providers to apply recognition software.

  18. 18.

    Ferguson (2017), pp. 114–118. For the Israeli intelligence agencies’ reportedly extensive and successful use of social media monitoring in detecting terrorists cf. Associated Press (2018); on terrorism in general see also Pelzer (2018).

  19. 19.

    Including well-established techniques such as fingerprint and DNA analysis. Cf. for a rather critical overview of ‘new’ technologies Murphy (2007), pp. 726–744; on more modern projects see Ferguson (2017), pp. 116–118.

  20. 20.

    Ferguson (2017), p. 118.

  21. 21.

    Rich (2016), p. 872; for an up-to-date account of that technology see Schemmel, para 32, and Throckmorton (2015), pp. 86–87; on data mining aimed at predicting tax avoidance see Lismont et al. (2018) and Braun Binder; and, with an example from Australia, see Djeffal, para 11.

  22. 22.

    E.g. the private Polaris Project, which analyzes telephone calls for help in cases of human trafficking in order to reveal places, routes, and even financial cash flows worthy of police attention.

  23. 23.

    See Eubanks (2018), for a critical report on software predicting child abuse tested in Allegheny County; see Spice (2015), reporting on a DARPA funded software to detect sex trafficking by screening online advertisements.

  24. 24.

    To some extent that includes misconduct within police forces as well, cf. Ferguson (2017), pp. 143–162.

  25. 25.

    Cf. Ferguson (2017), pp. 63–69. Commercial applications used in the US include HunchLab, Risk Terrain Modelling (RTM, cf. Caplan and Kennedy 2016, and Ferguson 2017, pp. 67–68), and PredPol.

  26. 26.

    For an up-to-date overview of place-based predictive policing in Germany cf. Seidensticker et al. (2018) and, for a criminological evaluation, Singelnstein (2018), pp. 3–5; for a comprehensive approach, which includes other forms of big data policing, cf. Rademacher (2017), pp. 368–372.

  27. 27.

    For the constitutional constraints the German police have to respect see para 18. A second reason for the German reluctance might also be that big data-driven proactive ‘rasterizing’ proved spectacularly ineffective when German police applied it in the aftermath of 9/11, cf. German Constitutional Court 1 BvR 518/02 ‘Rasterfahndung’ (4 April 2006), BVerfGE 115, pp. 327–331.

  28. 28.

    Cf. Burkert (2012), p. 101: ‘aimed at erecting Chinese walls within the executive’. This conception replaced the former idea of informational ‘Einheit der Verwaltung’ (executive unity), which prevailed, approximately, until the 1970s, cf. Oldiges (1987), pp. 742–743.

  29. 29.

    Rather negatively evaluated by Saunders et al. (2016), but reported to have improved significantly since 2015, cf. Ferguson (2017), p. 40.

  30. 30.

    Brühl (2018). On the federal ‘Polizei 2020’ program aimed at establishing an integrated database for all German police agencies, cf. Bundesministerium des Innern (2016).

  31. 31.

    Brühl (2018). The legal basis for this data mining is presumably Section 25a(1), (2) Hessisches Gesetz über die öffentliche Sicherheit und Ordnung (HSOG), in force since 4 July 2018.

  32. 32.

    Section 10 Geldwäschegesetz; cf. for a case study on anti-money laundering technology Demetis (2018); see also Schemmel, para 12.

  33. 33.

    Directive (EU) 2016/681 of 27 April 2016 on the use of passenger name record (PNR) data for the prevention, detection, investigation and prosecution of terrorist offences and serious crime. For an evaluation against the backdrop of EU and German constitutional law cf. Rademacher (2017), pp. 410–415.

  34. 34.

    For a status quo report on German PNR-analyses cf. Bundestag (2018).

  35. 35.

    See, for a rare example, Ferguson (2017), p. 88.

  36. 36.

    Schlossberg (2015).

  37. 37.

    For an early instance of aural surveillance cf. Zetter (2012).

  38. 38.

    Saracco (2017); see also May (2018), on AI designed to detect doping athletes.

  39. 39.

    Cf. Rademacher (2017), pp. 373–393; Rich (2016), pp. 880–886, 895–901.

  40. 40.

    Joh (2019), p. 179.

  41. 41.

    Henderson (2016), p. 935: ‘[W]hen it comes to criminal investigation, time travel seems increasingly possible.’

  42. 42.

    If police tried ‘to see into the past’ (Ferguson 2017, p. 98) before the rise of big data policing, they usually had to rely on human eye witnesses—who are notoriously unreliable.

  43. 43.

    See Barret (2016), reporting on technology that could connect up to 30 million CCTV cameras in the US.

  44. 44.

    Cf. German Constitutional Court 1 BvR 370/07 ‘Onlinedurchsuchung’ (27 February 2008), BVerfGE 120, pp. 344–346, concerning intelligence agents performing website searches.

  45. 45.

    Article 13 Grundgesetz (German Basic Law).

  46. 46.

    Fundamental right to confidentiality and integrity of IT systems, developed by the Constitutional Court in ‘Onlinedurchsuchung’ (see note 44), pp. 302–315. On that right cf. Heinemann (2015), pp. 147–171; Hoffmann-Riem (2008), pp. 1015–1021; Böckenförde (2008).

  47. 47.

    Constitutional Court ‘Onlinedurchsuchung’ (see note 44), p. 345.

  48. 48.

    E.g. German Constitutional Court 1 BvR 2074/05 ‘KFZ-Kennzeichenkontrollen’ (11 March 2008), BVerfGE 120, p. 431; cf. Rademacher (2017), pp. 403–405, 406–407.

  49. 49.

    See, e.g., German Constitutional Court ‘KFZ-Kennzeichenkontrollen’ (see note 48), p. 430 with p. 399: Although an ALPR scan was said not to count as an interference with A’s right to informational self-determination if A’s license plate number did not produce a match in the database and the image was therefore deleted immediately, A’s potential privacy fear (a ‘chilling effect’ due to a feeling of constant observation) should still render the interference with B’s right to informational self-determination—B’s license plate number having produced a ‘hit’—unconstitutional. Just after the Federal Administrative Court confirmed that irritating jurisprudence, the Constitutional Court reversed it, now holding that any form of video surveillance amounts to an interference with the right to informational self-determination, see Constitutional Court 1 BvR 142/15 ‘KFZ-Kennzeichenkontrollen 2’ (18 December 2018), para 45. Cf. Marsch (2012), pp. 605–616.

  50. 50.

    German Constitutional Court 1 BvR 209/83 ‘Volkszählung’ (15 December 1983), BVerfGE 65, p. 42: ‘[…] right of individuals to decide in principle themselves when and within what limits personal matters are disclosed’. Author’s translation.

  51. 51.

    Cf. Constitutional Court ‘Volkszählung’ (see note 50), p. 45: ‘in that respect, “unimportant” data no longer exist in the context of automated data processing’. Author’s translation.

  52. 52.

    The Sphärentheorie offered, in principle, heightened protection from surveillance for information that could be categorized as intimate or private, and did not protect, again in principle, information that was considered social or public.

  53. 53.

    Cf. Ferguson (2014), pp. 1313–1316.

  54. 54.

    Cf. Poscher (2017), pp. 131–134; Marsch (2018), pp. 116–124.

  55. 55.

    Cf. Marsch, paras 15–16. Ditto Poscher (2017), p. 132.

  56. 56.

    Constitutional Court ‘KFZ-Kennzeichenkontrollen’ (see note 48), and ‘KFZ-Kennzeichenkontrollen 2’ (see note 49).

  57. 57.

    Constitutional Court ‘Rasterfahndung’ (see note 27).

  58. 58.

    Cf. Staben (2016), pp. 160, 162, for a broad overview listing most, if not all, of the criteria the Constitutional Court has so far weighed against modern surveillance technologies.

  59. 59.

    See, esp., the decision on dragnet investigations, Constitutional Court ‘Rasterfahndung’ (see note 27), pp. 354, 356–357.

  60. 60.

    Constitutional Court ‘KFZ-Kennzeichenkontrollen’ (see note 48), pp. 401, 407.

  61. 61.

    Constitutional Court ‘Volkszählung’ (see note 50), p. 43; Constitutional Court ‘KFZ-Kennzeichenkontrollen’ (see note 48), pp. 402, 430; Constitutional Court ‘KFZ-Kennzeichenkontrollen 2’ (see note 49), paras 51, 98.

  62. 62.

    The Constitutional Court’s recourse to this line of argument (‘risks of abuse’, see e.g. Constitutional Court ‘KFZ-Kennzeichenkontrollen’ (see note 48), p. 402) has been harshly criticized by German legal scholarship for being empirically unfounded, cf. Trute (2009), pp. 100–101.

  63. 63.

    See also Hermstrüwer, paras 10, 19.

  64. 64.

    Whether this requires an imminent risk of harm to an individual’s health, life, or liberty, or whether police experience regarding a specific place (crime hot spots) or activity suffices (see for examples of the latter Constitutional Court ‘KFZ-Kennzeichenkontrollen 2’ (see note 49), para 94, and Rademacher 2017, pp. 401–410), depends on the ‘invasiveness’ of the respective means of surveillance. Obviously, that again requires a proportionality test, the outcome of which is hard to predict. In specific areas, such as tax law (see Braun Binder) and financial regulation (see Schemmel, paras 10–12 and para 32 for US law), state databases are ‘rasterized’ routinely, thus implementing a limited form of generalized suspicion (Generalverdacht) or so-called ‘anlasslose Kontrolle’ (suspicionless checks; cf. Constitutional Court ‘KFZ-Kennzeichenkontrollen 2’ (see note 49), para 94).

  65. 65.

    Constitutional Court ‘KFZ-Kennzeichenkontrollen’ (see note 48), p. 378, recital 4.

  66. 66.

    Rademacher (2017), pp. 401–403.

  67. 67.

    Marsch (2018), pp. 17–30. Cf. for an up-to-date account of the ECtHR’s jurisprudence on surveillance technology Bachmeier (2018), pp. 178–181.

  68. 68.

    Most recently confirmed in CJEU Case C-207/16 ‘Ministerio Fiscal’ (2 October 2018) para 51. The respective jurisprudence is mainly based on Article 8 CFR [Right to data protection]. For a detailed analysis of that provision and the shift in the CJEU’s jurisprudence towards an understanding similar to the German right to informational self-determination cf. Marsch, paras 29–32.

  69. 69.

    See, above all, Regulation (EU) 2016/679 of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation); Directive (EU) 2016/680 of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purpose of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data; and Directive (EG) 2002/58 of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (ePrivacy Directive). Cf. Dimitrova (2018).

  70. 70.

    Cf. CJEU Case C-203/15 ‘Tele2 Sverige’ (21 December 2016) para 111: ‘[N]ational legislation [requiring private companies to store communications data] must be based on objective evidence which makes it possible to identify a public whose data is likely to reveal a link, at least an indirect one, with serious criminal offences […].’ Confirmed by CJEU Opinion No. 1/15 ‘Passenger Name Record’ (26 July 2017), para 191. But cf. the referrals submitted under Article 267 TFEU by the Investigatory Powers Tribunal London (C-623/17), the French Conseil d’État (C-512/18), and the Belgian Constitutional Court (C-520/18), rather critically scrutinizing the CJEU’s data protection-friendly approach. See also ECtHR Case-No. 35252/08 ‘Big Brother Watch v. United Kingdom’ (13 September 2018), para 112.

  71. 71.

    Take, for instance, ‘intelligent’ video surveillance that is supposed to alert to pickpocketing at train stations. Certainly, it will need to indiscriminately record at least some minutes of what happens on the platform to distinguish suspicious behavior from people just strolling around waiting for their trains.

  72. 72.

    Additionally, the data will be needed to train new algorithms and evaluate algorithms which are already applied, cf. CJEU ‘Passenger Name Record’ (see note 70), para 198.

  73. 73.

    ‘[N]ot […] limited to what is strictly necessary’, CJEU ‘Passenger Name Record’ (see note 70), para 206.

  74. 74.

    CJEU ‘Passenger Name Record’ (see note 70), paras 204–209.

  75. 75.

    Meaning that the ‘models and criteria’ applied by Canada must be ‘specific and reliable, making it possible […] to arrive at results targeting individuals who might be under a “reasonable suspicion” of participation in terrorist offences or serious transnational crime’, cf. CJEU ‘Passenger Name Record’ (see note 70), para 172.

  76. 76.

    See CJEU ‘Passenger Name Record’ (see note 70): ‘The transfer of that data to Canada is to take place regardless of whether there is any objective evidence permitting the inference that the passengers are liable to present a risk to public security in Canada.’ [para 186] ‘[T]hat processing is intended to identify the risk to public security that persons, who are not, at that stage, known to the competent services, may potentially present, and who may, on account of that risk, be subject to further examination. In that respect, the automated processing of that data, before the arrival of the passengers in Canada, facilitates and expedites security checks, in particular at borders.’ [para 187] According to para 191 et seq., that suffices to ‘establish a connection between the personal data to be retained and the objective pursued’. Cf. Rademacher (2017), pp. 412–413.

  77. 77.

    That is especially true as the CJEU’s reasoning in its ‘Passenger Name Record’ decision (see note 70) is premised on public international law concerning air traffic and respective border controls, cf. para 188. However, public international law is only one line of argument in favor of the agreement the Court has accepted, cf. para 187.

  78. 78.

    Due to the interference with the fundamental right to informational self-determination, in any case a specific legal basis is required, see Article 52(1) CFR.

  79. 79.

    See also CJEU ‘Ministerio Fiscal’ (see note 68), paras 54, 56–57.

  80. 80.

    Ferguson (2017), p. 98; ditto Wittmann (2014), pp. 368–369. See also Joh (2016), p. 17: ‘Unlike arrests or wiretaps, the decision to focus police attention on a particular person, without more, is unlikely to be considered a Fourth Amendment event.’

  81. 81.

    Cf. Ferguson (2014), p. 1333: ‘In a government truly of limited powers, police would not have the surveillance powers to invade privacy or security unless there was a law specifically allowing it. Such is not the current reality under the Fourth Amendment.’ But cf. Ferguson (2017), p. 116, suggesting that social media monitoring and respective storage of data could interfere with the First Amendment as well.

  82. 82.

    Katz v. United States, 389 U.S. 347 (1967), p. 361 (Harlan, J., concurring).

  83. 83.

    ‘Public’ is defined quite broadly, encompassing any communication that is directed to third parties, cf. Smith v. Maryland, 442 U.S. 735 (1979); for an in-depth analysis of the Court’s case law see Wittmann (2014), pp. 146–328; arguably, Carpenter v. United States has destabilized the third party doctrine, too (cf. note 99).

  84. 84.

    Joh (2016), p. 18, notes that ‘[s]urprisingly, there is little discussion of these decisions that the police make about individuals before any search, detention, or arrest takes place. Rather, current unresolved issues of police technology have focused on whether a particular use is a Fourth Amendment search requiring a warrant and probable cause.’

  85. 85.

    Rich (2016), pp. 895–901; Ferguson (2015), pp. 388–409; Joh (2014), pp. 55–65, but see pp. 66–67: ‘Beyond the Fourth Amendment’.

  86. 86.

    Cf. Kyllo v. United States, 533 U.S. 27 (2001), pp. 34–35.

  87. 87.

    That does not mean that legislators could not step in and implement more restrictive requirements for surveillance that fall short of constituting an interference under the Fourth Amendment. According to Ferguson (2017), p. 101, however, such legislative restrictions or clarifications are, to date, missing. The new California Consumer Privacy Act (CCPA), which has received great attention in the EU, too, limits its scope of application to data processing by private entities (cf. Cal. Civ. Code § 1798.140(c)).

  88. 88.

    Cf. Ferguson (2014), p. 1305, identifying seven ‘values’ discussed as underlying the Fourth Amendment case law.

  89. 89.

    Ferguson (2014), p. 1307.

  90. 90.

    United States v. Jones, 565 U.S. 400 (2012), p. 404.

  91. 91.

    Cf. Ferguson (2014), p. 1308.

  92. 92.

    United States v. Jones (see note 90), p. 418 (Alito, J., concurring). But see also Justice Sotomayor’s opinion, ibid, at p. 414, agreeing with the majority that ‘the Government’s physical intrusion on Jones’ Jeep supplies a […] basis for decision’ in any case.

  93. 93.

    A similarly historical approach applies to surveillance technology that is able to ‘explore details of the home that would previously have been unknowable without physical intrusion’ (Kyllo v. United States (see note 86), p. 40)—such surveillance does constitute a search within the meaning of the Fourth Amendment; see also Florida v. Jardines, 133 S.Ct. 1409 (2013), p. 1419 (Kagan, J., concurring).

  94. 94.

    United States v. Jones (see note 90), p. 430 (Alito, J., concurring).

  95. 95.

    Ibid.

  96. 96.

    Ibid, pp. 429–430 (Alito, J., concurring).

  97. 97.

    Ibid, pp. 416–417 (Sotomayor, J., concurring). See also paras. 18 and 42. Cf. Staben (2016), pp. 67–68: the argument of chilling effects does appear in the Supreme Court’s jurisprudence, but usually regarding the First Amendment.

  98. 98.

    Thus, at least under traditional interpretation of the Fourth Amendment, constituting public disclosure; see also notes 83 and 99.

  99. 99.

    The crime in question in Carpenter was robbery. Interestingly, Justice Alito dissented, arguing that the majority’s decision would be ‘revolutionary’ inasmuch as it ignored established case law according to which the Fourth Amendment would not apply to ‘an order merely requiring a [third] party to look through its own records and produce specific documents’ (Carpenter v. United States, 585 U.S. ____ (2018), p. 12 (Roberts, C. J., for the majority)).

  100. 100.

    Ibid, p. 18 (Roberts, C. J., for the majority).

  101. 101.

    Obviously, the degree of precision and accuracy that is required will vary depending on the intrusiveness of the surveillance measure itself, the availability of less intrusive means, and the severity of the crime or threat in question.

  102. 102.

    It is a mistake to conclude that the application of machine learned patterns could not result in individualized predictions (cf. Ferguson 2017, p. 127: ‘generalized suspicion’). As soon as data concerning a specific individual is used as input data for the prediction, the result is individualized by definition. The question that really matters is whether it is individualized enough, i.e. whether the pattern in question relies on more than just one or two predictors such as place of birth, education, etc. (cf. Hermstrüwer, para 10, and also paras 30–34 on the converse risk of excessive individualization (‘overfitting’)). One should bear in mind that all police suspicion, be it detected by human or technological means, starts with and relies on some form of pattern recognition, i.e. the application of previously learned information to new situations. For details cf. Rademacher (2017), pp. 373–377, 381–383, and Harcourt and Meares (2011), p. 813: ‘In reality, most individuals arouse suspicion because of the group-based-type behavior that they exhibit or the fact that they belong to readily identifiable groups—sex and age are two examples—rather than because of unique individual traits. Typically, individuals come to police attention because they are young, or are male, or are running away from the police, or have a bulge in their pocket’.
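
    To illustrate the point about being ‘individualized enough’, the following is a minimal, purely illustrative sketch (not from the chapter; all predictors, rules, and data are invented): the same individual’s record is run through a coarse pattern built on one or two group-level predictors and through a denser pattern that weaves several behavioral predictors together. Both outputs are individualized in the sense that they depend on this person’s input data; only the second approaches the density the chapter describes.

```python
# Illustrative sketch only: invented predictors, rules, and data.
person = {"age": 19, "sex": "m", "ticket_paid_cash": True,
          "one_way_booking": False, "prior_alerts": 0}

def coarse_pattern(p):
    """Relies on just two group-level predictors (age, sex)."""
    return p["age"] < 25 and p["sex"] == "m"

def denser_pattern(p):
    """Requires several behavioral indicators to coincide before flagging."""
    indicators = [p["ticket_paid_cash"], p["one_way_booking"],
                  p["prior_alerts"] > 0, p["age"] < 25]
    return sum(indicators) >= 3

print("coarse pattern flags the person:", coarse_pattern(person))  # True
print("denser pattern flags the person:", denser_pattern(person))  # False
```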

  103. 103.

    See, for further details on the methods of evaluating predictive algorithms and on the difference between precision and accuracy, Degeling and Berendt (2017), esp. 3.2.
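
    A minimal numerical sketch of that difference (the counts are invented, not taken from Degeling and Berendt): in a rare-event screening setting, a system can score high on accuracy while most of its ‘suspicious’ flags are false alarms, i.e. while its precision is low.

```python
# Illustrative sketch only: all counts are invented.
def accuracy(tp, fp, tn, fn):
    """Share of all screened cases that were classified correctly."""
    return (tp + tn) / (tp + fp + tn + fn)

def precision(tp, fp):
    """Share of 'suspicious' flags that actually hit a true positive."""
    return tp / (tp + fp) if (tp + fp) else 0.0

# Hypothetical screening of 10,000 travellers, 50 of whom are true positives.
tp, fn = 40, 10                      # true positives found / missed
fp = 460                             # innocent travellers flagged
tn = 10_000 - tp - fn - fp           # everyone else, correctly cleared

print(f"accuracy:  {accuracy(tp, fp, tn, fn):.3f}")  # 0.953, looks impressive
print(f"precision: {precision(tp, fp):.3f}")         # 0.080, i.e. 92% of flags are false alarms
```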

  104. 104.

    All forms of suspicion are probabilistic in nature, be they human or technological. By definition, reliance on ‘suspicion’ accepts that any actions based thereon are made in a state of possible incompleteness of information (cf. Rich 2016, p. 898; Rademacher 2017, p. 383) and should—consequently—be open to ex post rectification.

  105. 105.

    Interestingly, American scholars suggest comparing smart law enforcement to drug dogs rather than to humans, e.g. Rich (2016), pp. 913–921. On the law of drug dogs see esp. Rodriguez v. United States, 575 U.S. __ (2015), pp. 5–6 (Ginsburg, J., for the majority), finding that police may perform investigations (like dog sniffs) unrelated to a roadside detention (which itself requires ‘probable cause’ under the Fourth Amendment), but only if that investigation does not prolong the stop. In Florida v. Jardines (see note 93), the Supreme Court held, in a 5 to 4 decision, that a dog sniff does amount to a search within the meaning of the Fourth Amendment when it is performed on property surrounding the home of a person (so-called curtilage, in that specific case: a front porch), if that property had been entered with the intention of performing that investigation. On the other hand, Justice Scalia reaffirmed that ‘law enforcement officers need not “shield their eyes” when passing by the home “on public thoroughfares”’ (at p. 1423).

  106. 106.

    Ferguson (2017), p. 198: ‘“Here is how we test it” may be a more comforting and enlightening answer than “here is how it works.”’ For a detailed analysis of up-to-date testing mechanisms cf. Kroll et al. (2017), pp. 643–656.

  107. 107.

    Cf. Rich (2016), pp. 913–921, premised on the comparability of smart law enforcement (‘automated suspicion algorithms’) with drug dogs.

  108. 108.

    Cf. Hermstrüwer, paras 52–55, who correctly notes that acceptability of false positives or false negatives depends on whether AI is applied for information gathering, or for preventive or punitive purposes.
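
    A minimal sketch of that trade-off (the scores are invented, not taken from Hermstrüwer): shifting the decision threshold of a suspicion score trades false positives against false negatives, so a threshold that is tolerable for mere information gathering need not be tolerable for preventive or punitive measures.

```python
# Illustrative sketch only: invented suspicion scores for ten people.
scores_innocent = [0.05, 0.20, 0.35, 0.40, 0.55, 0.60]  # ground truth: not involved
scores_guilty   = [0.45, 0.65, 0.80, 0.90]               # ground truth: involved

for threshold in (0.3, 0.5, 0.7):
    false_positives = sum(s >= threshold for s in scores_innocent)
    false_negatives = sum(s < threshold for s in scores_guilty)
    print(f"threshold {threshold:.1f}: "
          f"false positives = {false_positives}, false negatives = {false_negatives}")
# threshold 0.3: FP = 4, FN = 0   (perhaps tolerable for mere information gathering)
# threshold 0.7: FP = 0, FN = 2   (more fitting before coercive or punitive measures)
```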

  109. 109.

    Cf. Kroll et al. (2017), pp. 695–705, for detailed ‘recommendations’ to lawmakers, policymakers, and computer scientists to ‘foster’ interdisciplinary collaboration.

  110. 110.

    Ibid, pp. 657–658.

  111. 111.

    See, e.g., Bieker et al. (2018), p. 610, referring to Article 13 of the EU’s General Data Protection Regulation (see para 69).

  112. 112.

    Ditto Hermstrüwer, paras 3, 45–47, and Kroll et al. (2017), p. 657: a ‘naïve solution to the problem’; Ferguson (2017), pp. 137, 138: ‘The issue […] is not the transparency of the algorithm […] but the transparency of how the program is explained to the public and, of course, what is done with the information’. See also Wischmeyer, passim, and esp. paras 24 et seq. and 30.

  113. 113.

    Cf. Joh (2014), pp. 50–55.

  114. 114.

    Cf. Rich (2016), p. 919.

  115. 115.

    Hildebrandt (2016), pp. 3, 21–22; see also Marks et al. (2017), pp. 714–715: ‘automatic criminal justice’.

  116. 116.

    On the need to (re)establish human agency cf. Wischmeyer, paras 24 et seq.

  117. 117.

    See also Kroll et al. (2017), pp. 657–660, Ferguson (2017), pp. 137–138, and, for an up-to-date overview of the accountability discussion, Andrews (2019). This is not to say that specific public officials should not have the right to scrutinize source codes, training and test data etc. if circumstances, especially procedures of judicial review, require that kind of additional transparency. See for details Wischmeyer, esp. para 47.

  118. 118.

    For techniques to preserve privacy in the course of human re-evaluation of video data cf. Birnstill et al. (2015).

  119. 119.

    Ditto Rich (2016), p. 920, who, however, appears to be skeptical as to the practicality of such systems of disclosure: ‘theoretically solvable’. See also Wischmeyer, esp. para 27. The respective techniques are called explainable AI (short form: XAI), cf. Waltl and Vogl (2018) and Samek et al. (2017).
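
    As a minimal, purely illustrative sketch of what such disclosure could look like (the feature names and weights are invented, and the method is far simpler than real XAI techniques such as LRP or SHAP): a linear suspicion score can at least be decomposed into per-feature contributions that a human reviewer can inspect.

```python
# Illustrative sketch only: invented features and weights for a linear score.
weights = {"ticket_paid_cash": 1.2, "one_way_booking": 0.8, "late_booking": 0.5}
case    = {"ticket_paid_cash": 1,   "one_way_booking": 1,   "late_booking": 0}

contributions = {f: weights[f] * case[f] for f in weights}
score = sum(contributions.values())

print(f"suspicion score: {score:.2f}")
for feature, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    # the human reviewer sees which features drove the score, and by how much
    print(f"  {feature:18s} contributed {value:+.2f}")
```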

  120. 120.

    Eventually also including ‘counterintuitive’ insights, cf. Ferguson (2017), pp. 117, 136–140; for a critical account under EU law cf. Rademacher (2017), pp. 388–391.

  121. 121.

    It is important to note that in this case it is irrelevant that the software itself is limited to detecting correlations and is not able to ‘understand’ causal links. To Ferguson (2017), p. 119, the difference between correlation and causation is one of the ‘fundamental questions’ behind big data policing. I disagree: The lack of understanding, which is inherent in machine learning, would only constitute a case against the use of smart law enforcement technologies if we were to require the software to be held accountable, i.e. require it to explain itself and be subject, eventually, to disciplinary or electoral sanctions. Instead, what we need is to establish a regulatory framework that preserves human accountability. Therefore, the software itself does not need to ‘understand’ the correlations it searches for. See also, from a private law perspective, Eidenmüller (2017), p. 13: ‘Treating robots like humans would dehumanize humans, and therefore we should refrain from adopting this policy.’

  122. 122.

    Cf. Rich (2016), pp. 911–924 for a detailed analysis of the law on drug dogs (in the US) and its suitability for being applied, by way of analogy, to ‘automated suspicion algorithms’; see also note 105.

  123. 123.

    Ferguson (2017), p. 133.

  124. 124.

    Cf. Tischbirek, paras 5 et seq.

  125. 125.

    See Buchholtz, para 30; Hacker (2018), pp. 1143–1144; but see also Brantingham et al. (2018), p. 1: ‘We find that there were no significant differences […] by racial-ethnic group between the control and treatment conditions.’ For a detailed account of comparable software being tested in the criminal justice system of the UK, cf. Scantamburlo et al. (2019), esp. pp. 58 et seq. 

  126. 126.

    See Hermstrüwer, paras 3–4; see also Bennet Capers (2017), pp. 1242, 1271: ‘I am a black man. […] I am interested in technology that will lay bare not only the truth of how we police now but also how those of us who are black or brown live now.’

  127. 127.

    The example is taken from Bennet Capers (2017), p. 1242, who uses this equation to describe his perception of the status quo of human law enforcement in the US.

  128. 128.

    Ditto Hacker (2018), pp. 1146–1150.

  129. 129.

    Cf. Kroll et al. (2017), p. 685; see Tischbirek, para 13. For an up-to-date catalogue of sensitive predictors under EU law see Article 21 CFR.

  130. 130.

    Ferguson (2017), pp. 122–124; Kroll et al. (2017), p. 685; see Tischbirek, paras 11–12.

  131. 131.

    Current research is summed up by Kroll et al. (2017), pp. 682–692. For an account of legal tools to reveal discriminatory algorithms see Hacker (2018), pp. 1170–1183.

  132. 132.

    Kroll et al. (2017), p. 674 (procedural fairness), pp. 690–692 (nondiscrimination). See also Hermstrüwer, paras 41–43.

  133. 133.

    The approach has therefore been labelled ‘fairness through awareness’, cf. Dwork et al. (2011). See also Tischbirek, paras 31 et seq.: ‘towards a paradigm of knowledge creation’.
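
    A minimal sketch of why ‘awareness’ matters (the data is invented): auditing an algorithm’s output for disparate flag rates presupposes that the sensitive attribute is recorded at all, i.e. that the system is aware of it rather than blind to it.

```python
# Illustrative sketch only: invented flagging decisions for two groups.
from collections import defaultdict

decisions = [  # (sensitive group, flagged by the algorithm?)
    ("A", True), ("A", False), ("A", False), ("A", False),
    ("B", True), ("B", True),  ("B", False), ("B", False),
]

counts = defaultdict(lambda: [0, 0])          # group -> [flagged, total]
for group, flagged in decisions:
    counts[group][0] += int(flagged)
    counts[group][1] += 1

flag_rates = {g: flagged / total for g, (flagged, total) in counts.items()}
print(flag_rates)                                                # {'A': 0.25, 'B': 0.5}
print("disparity ratio:", min(flag_rates.values()) / max(flag_rates.values()))  # 0.5
```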

  134. 134.

    Ferguson (2017), p. 137.

  135. 135.

    See, e.g., Bennet Capers (2017), pp. 1268–1283, 1285, with a strong plea for the replacement of (biased) human policing by (hopefully) less biased policing by technology.

  136. 136.

    Cf. Tyler (1990), pp. 3–4; Hoffmann-Riem (2017), pp. 33–34.

  137. 137.

    See, e.g., Cheng (2006), pp. 659 et seq.

  138. 138.

    See note 4 and Rich (2013), pp. 802–804; for more recent reflections on impossibility structures or ‘embedded law’ cf. Rademacher (2019) and Becker (2019), respectively.

  139. 139.

    See, for a plea for preventive regulation of financial markets, Schemmel, para 46.

  140. 140.

    Cf. Bennet Capers (2017), pp. 1282–1283, 1285–1291.

  141. 141.

    Orwell (1949).

  142. 142.

    Cf. Oganesian and Heermann (2018).

  143. 143.

    For a more sophisticated attempt to explain ‘The Dangers of Surveillance’ cf. Richards (2013), esp. pp. 1950–1958, 1962–1964. See Timan et al. (2018), pp. 744–748, for an interdisciplinary attempt to reconcile the insights of ‘surveillance studies’ with legal reasoning, quite rightly asking for ‘more legal scholarship’ that ‘views surveillance as generally good and bad at the same time, or as good or bad depending on the situation’.

  144. 144.

    Solove (2007), pp. 758, 765; Richards (2013), p. 1961; for a thorough analysis from the German perspective see Staben (2016) and Oermann and Staben (2013).

  145. 145.

    Joh (2019), p. 178: ‘[A]s cities become “smarter”, they increasingly embed policing itself into the urban infrastructure.’

  146. 146.

    See also Timan et al. (2018), p. 738: ‘increasing blend of governmental and corporate surveillance infrastructures’ and ‘an increase of citizen-instigated forms of surveillance can be witnessed’.

  147. 147.

    Cf. Rich (2013), p. 810. This sentiment might actually deserve some form of legal, perhaps even constitutional recognition. The reason is that we live in what I would call ‘imperfect democracies’, i.e. societies that try very hard to balance majority rule on the one hand against the individual’s rights to self-determination and political participation on the other—by providing a plethora of fundamental and political rights in order to protect minorities—but which fail, and will continue to fail, to fully achieve that balance for many practical reasons. As long as even democratic laws cannot claim to be perfectly legitimate with regard to each and every paragraph, there is good reason to argue that such laws, for their part, may not claim perfect compliance.

  148. 148.

    Petroski (2018).

  149. 149.

    Cheng (2006), pp. 682–688, with a critical account of legislation respecting that wish.

  150. 150.

    See also Hartzog et al. (2015), esp. pp. 1778–1792, advocating for automated law enforcement to be consciously ‘inefficient’ to prevent ‘perfect enforcement’.

  151. 151.

    Mulligan (2008), p. 3.

  152. 152.

    See also Timan et al. (2018), p. 747, citing J Cohen: ‘importance of room for play’.

  153. 153.

    United States v. Jones (see note 90), p. 429; see also note 55 for the German discussion.

  154. 154.

    See also Rich (2013), pp. 804–828; Hartzog et al. (2015), pp. 1786–1793; Hoffmann-Riem (2017), p. 34; Rademacher (2017), pp. 398, 403–410.


Author information


Correspondence to Timo Rademacher.


Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Rademacher, T. (2020). Artificial Intelligence and Law Enforcement. In: Wischmeyer, T., Rademacher, T. (eds) Regulating Artificial Intelligence. Springer, Cham. https://doi.org/10.1007/978-3-030-32361-5_10


  • DOI: https://doi.org/10.1007/978-3-030-32361-5_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-32360-8

  • Online ISBN: 978-3-030-32361-5

  • eBook Packages: Law and Criminology (R0)
