
Electronic Evidence and Electronic Signatures
Chapter 4

4

Software code as the witness

Stephen Mason

4.1 The aim of this chapter is to illustrate how software code can affect the examination and introduction of electronic evidence in legal proceedings. The topic is considered in the context of software code as the ‘witness’. It is important to understand how software can affect an assessment of the truth in any given set of facts, and that software code can be written deliberately to deceive.1 Failure to appreciate this can lead to unfairness in legal proceedings and incorrect decisions.

Stephen Mason, ‘Software code as the witness’, in Stephen Mason and Daniel Seng (eds.), Electronic Evidence and Electronic Signatures (5th edn, University of London 2021) 112–125.

1See the example of Volkswagen AG, Audi AG and Volkswagen Group of America, Inc, described in detail in Chapter 5; for the US, see the excellent article by Andrea Roth, ‘Machine testimony’ (2017) 126 Yale LJ 1972.

4.2 A digital computer is like a mechanical device, where switches replace gears, and the switches are miniaturized. However, it is impossible to build a mechanical device that reflects the functionality of a modern digital computer, because such a device would require both a machine built on a colossal scale and the use of materials beyond the strengths or machine tolerances of what is possible to manufacture mechanically. To complete the picture, physical digital devices, as indicated in Chapter 1, cannot work without the software written by programmers and the input by users.

4.3 It follows that electronic evidence could be treated as a joint statement that is:

(i) partly made by the person inputting data (such as typing an email or word document, inserting a PIN, filling in forms over the Internet – in essence anything a person does when interacting with a device), and

(ii) partly made by the hundreds of programmers who are responsible for writing the software that produces the data.

4.4 For this reason, there is an argument, as proposed by Steven W. Teppler, that all forms of evidence in digital form remain hearsay,1 because software code conveys information.2 Teppler3 gives the example of United States Patent No. 5,619,571, which includes some uncompiled source code containing the following lines in the application:

[Source code excerpt from the patent application not reproduced here; see note 4 for the location of the quoted lines.]

1Assistant Professor Andrea Roth points out, in ‘Machine testimony’ (2017) 126 Yale LJ 1972 at 1980, that ‘the hearsay rule itself could not easily be modified to accommodate machines, given its focus on the oath, physical confrontation, and cross-examination’. This must be right.

2Steven W. Teppler, ‘Testable reliability: a modernized approach to ESI admissibility’ (2014) 12(2) Ave Maria Law Review 213.

3Steven W. Teppler, ‘Digital data as hearsay’ (2009) 6 Digital Evidence and Electronic Signature Law Review 7.

4U.S. Patent No. 5,619,571 (issued Apr. 8, 1997), 17–18, lines 10–14.

4.5 What this comment indicates is an acknowledgment of the possibility of a weakness in the software code that has been written, not that the software code is or will be at fault. In this regard, it is useful to understand more fully the nature of source code. For instance, Svein Willassen explains the complex nature of software as follows:

Software is written as source code. The source code is written by the programmer, by entering instructions in an editor. The sequence of instructions defines the function of the program, such as taking input from the user, performing calculations, showing output on the screen and so on. This source code is then usually compiled into an executable program (an executable file causes a computer to perform tasks in accordance with the instructions), which is distributed to the users of the program. The source code cannot be derived completely from the executable program.1

1Svein Yngvar Willassen, ‘Line based hash analysis of source code infringement’ (2009) 6 Digital Evidence and Electronic Signature Law Review 210.
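To make Willassen’s point concrete, here is a minimal sketch (not drawn from the chapter; the function and file names are invented) in Python, whose built-in compiler turns source text into executable bytecode. The instructions survive compilation, but the programmer’s comment does not, which is one reason why the source code cannot be derived completely from the executable program.

```python
import dis

# Source code as the programmer wrote it, including a comment.
source = """
def add_vat(price):
    # NOTE: rate hard-coded; revisit if the VAT rate changes
    return price * 1.20
"""

# Compile the source into an executable code object (the analogue of the
# 'executable program' in the quotation above).
code_object = compile(source, filename="<example>", mode="exec")

# Disassembling the result shows only instructions and constants;
# the comment has not survived compilation.
dis.dis(code_object)
```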

4.6 In the Australian case of Computer Edge Pty Limited v Apple Computer Inc,1 Gibbs CJ offered the following explanation of the various parts of a computer program:

A computer program is a set of instructions designed to cause a computer to perform a particular function or to produce a particular result. A program is usually developed in a number of stages. First, the sequence of operations which the computer will be required to perform is commonly written out in ordinary language, with the help, if necessary, of mathematical formulae and of a flow chart and diagram representing the procedure. In the present case if any writing in ordinary language (other than the comments and labels mentioned below) was produced in the production of Applesoft and Autostart, no question now arises concerning it. Next there is prepared what is called a source program. The instructions are now expressed in a computer language—either in a source code (which is not far removed from ordinary language, and is hence called a high level language) or in an assembly code (a low level language, which is further removed from ordinary language than a source code), or successively in both. Sometimes the expression ‘source code’ seems to be used to include both high level and low level language. In the present case, the source programs were written in an assembly code, comprising four elements, viz.:

(a) labels identifying particular parts of the program;

(b) mnemonics each consisting of three letters of the alphabet and corresponding to a particular operation expressed in 6502 Assembly Code (the code used);

(c) mnemonics identifying the register in the microprocessor and/or the number of instructions in the program to which the operation referred to in (b) related; and

(d) comments intended to explain the function of the particular part of the program for the benefit of a human reader of the program.

The writing has been destroyed, although it is possible to reconstruct the mnemonics, but not the labels and comments, which were comprised in it.

The source code or assembly code cannot be used directly in the computer, and must be converted into an object code, which is ‘machine readable,’ i.e. which can be directly used in the computer. The conversion is effected by a computer, itself properly programmed. The program in object code, the object program, in the first instance consists of a sequence of electrical impulses which are often first stored on a magnetic disk or tape, and which may be stored permanently in a ROM (‘read only memory’), a silicon chip which contains thousands of connected electrical circuits. The object code is embodied in the ROM in such a way that when the ROM is installed in the computer and electrical power is applied, there is generated the sequence of electrical impulses which cause the computer to take the action which the program is designed to achieve. The pattern of the circuits in the ROM may possibly be discerned with the aid of an electron microscope but it cannot be seen by the naked eye. Obviously, the electrical impulses themselves cannot be perceived. However the sequence of electrical impulses may be described either in binary notation (using the symbols 0 and 1) or in hexadecimal notation (using the numbers 0–9 and the letters A–F), and it is possible to display the description on the visual display unit of the computer, and to print it out on paper. And, as has been said, it is also possible to reconstruct the mnemonics in the source code. It will have been seen from this account that a program exists successively in source code and in object code, but the object code need not be written out in binary or hexadecimal notation in the process of producing and storing the program.2

1[1986] F.S.R. 537.

2[1986] F.S.R. 537 at 541–542.

4.7 The term ‘source code’ is also the subject of a commentary in the case of Ibcos Computers Ltd v Barclays Mercantile Highland Finance Ltd1 by Jacob J:

The program the human writes is called the ‘source code.’ After it is written it is processed by a program called a compiler into binary code. That is what the computer uses. All the words and algebraic symbols become binary numbers. Now when a human writes he often needs to make notes to remind himself of what he has done and to indicate where the important bits are. This is true of life generally and for programmers. So it is possible to insert messages in a source code. A reader who has access to it can then understand, or understand more readily, what is going on. Such notes, which form no part of the program so far as the computer is concerned, are called ‘comments.’ They are a kind of side-note for humans. In the DIBOL and DBL programs with which I am concerned, a line or part of a line of program which is preceded by a semi-colon is taken by the compiler as a comment. That line is not translated by the compiler into machine code. The program would work without the comment. It follows that although computers are unforgiving as to spelling in their programs, they do not care about misspelt comments in the source code. If a line of operational code (a ‘command line’) is modified by putting a semi-colon in front of it, it ceases to be operational. The computer treats the code as a mere comment. Computer programmers sometimes do this with a line which pre-exists when they no longer want that line, but are not sure they may not need it in the future. Or, if the programmer thinks he may want to add a feature to his program in the future he may put in a comment allowing for this. He is unlikely in the latter instance to put in detailed code only to comment it out. A general note will do.

Source code, being what humans can understand, is very important to anyone who wants to copy a program with modifications, for instance to upgrade it. It is the source code which shows the human how it all works, and he or she will also get the benefit of all the comments laid down by the original programmer. Software houses not surprisingly normally keep their source code to themselves and confidential.2

1[1994] 2 WLUK 353, [1994] FSR 275, [1998] Mason CLR Rep 1, [1995] CLY 854.

2[1994] FSR 275 at 286.

4.8 There is a distinction between the code written by programmers that provides instructions to the computer and the comments made by the programmer writing the code. If the software code is inaccurate, or if an instruction written by a programmer acts on information or a further instruction that is incorrect, then the code will probably fail to instruct the computer in the way the programmer intended. However, comments by a programmer that do not form part of the instructions cannot necessarily be considered to be part of the code.
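Jacob J’s description of comments and commented-out ‘command lines’ can be illustrated with a minimal sketch (Python is used here, so the comment marker is ‘#’ rather than the semicolon used in the DIBOL and DBL programs before him; the figures are invented):

```python
def monthly_interest(balance):
    rate = 0.02
    # rate = 0.05   # a superseded command line, kept in case it is needed again
    return balance * rate

# The commented-out line forms no part of the program so far as the
# computer is concerned: it is visible to a human reader but has no effect.
print(monthly_interest(1000))   # prints 20.0
```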

The classification of digital data

4.9 The starting point for this analysis is an attempt at classifying software code as digital data. To this end, Professor Ormerod, the commentator in a report on the case of R. v Skinner (Philip),1 suggested there were three questions to consider for every type of digital data:

(i) Who or what made the representation.

(ii) Whether the representation was hearsay or not.

(iii) Whether the evidence is authentic.2

1[2005] EWCA Crim 1439, [2005] 5 WLUK 506, [2006] Crim LR 56.

2David C. Ormerod, ‘Evidence: information copied from one website to another’ [2006] Crim LR 56.

4.10 In Elf Caledonia Ltd v London Bridge Engineering Ltd, Lord Caplan noted the following:

The defenders suggested that there are three categories of use for computers. They can be used to record data without the need of human intervention. The Spectra-Tek programme was described as being of this type. It was said that what this programme prints out may be regarded as real evidence. However Counsel had to concede that even this type of computer exercise depends on the reliability of the material programme. Unless it is properly programmed it will not store and regurgitate facts accurately …

Another category of computer use was said to be where data is recorded by the computer and the data is put in manually. Thus Piper would regularly send information to the beach and this would be entered in the computer system. It was accepted that to prove this material would involve some hearsay evidence unless the persons who entered the material in the computer were led as witnesses. However the defenders did not explore just what evidence would be required in the situation under consideration. In general it seems to me that there must be many cases where it would not be practicable to lead the person who generated the data and the person who fed it into the computer so that there must be some practical limits as to what proof can be expected in this kind of computer evidence.

It was submitted that the third type of computer situation is where the computer is used by experts to carry out calculations or simulations. It was claimed that in this kind of situation the general rules relating to expert evidence should be applied. Certainly in this kind of situation one can get a distorted result if one factor is in-putted wrongly. The kind of computer models used by experts of course generally requires more than normal discrimination and judgment in the selection of in-put material. Thus the expert will have to prove how the input material was arrived at and the justification for selecting what was put in. However I am not sure that the three categories of computer exercise referred to by the defenders’ Counsel can be distinguished quite as neatly as he attempts. Even in a simple office system distorted results will arise if the proper material is not fed into the computer. Thus it was argued that the first requirement in considering computer evidence given by an expert is to consider the input. That may be so but it cannot be exclusive to expert computer evidence. Of course it was said that the best evidence of in-put and out-put material is in the print-outs of such material.1

1[1997] ScotCS 1, 898–900, sub nom Elf Enterprise Caledonia Ltd v London Bridge Engineering Limited [1997] ScotCS 1, 2.

4.11 Based on this categorization, Professor Ormerod noted that some types of computer-generated representations do not infringe the hearsay rule.1 If a computer carries out the instructions of the program that has been written by humans to create such data, it may be right to suggest that such data are probably accurate without the need to test whether they are correct. But if the time noted by a clock on a camera linked to an ATM is to be offered into evidence to link the accused to the murder of the person whose card was used in the ATM, then that time will have to be adduced for the truth of its contents, as in the case of Liser v Smith,2 and there will be a need to validate the clock and to verify the time and date set by a human being.3

1Although he accepted that s 129 of the Criminal Justice Act 2003 may need to be considered. For a commentary on s 129, see John R. Spencer, Hearsay Evidence in Criminal Proceedings (Hart Publishing 2008) ch 3.

2254 F.Supp.2d 89 (D.D.C. 2003).

3Colin Tapper, ‘Reform of the law of evidence in relation to the output from computers’ (1995) 3 Intl J L & Info Tech 79, 85 fn 44.
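The validation contemplated in 4.11 can be sketched in outline (the times and the offset are invented, and a real validation exercise would involve far more than a single comparison): the device clock is checked against a trusted reference, the offset is recorded, and timestamps recovered from the device are then read subject to that offset.

```python
from datetime import datetime

reference_time = datetime(2021, 3, 1, 14, 0, 0)   # trusted time source
device_time = datetime(2021, 3, 1, 13, 57, 42)    # clock on the ATM camera at the same moment

offset = reference_time - device_time
print("device clock is slow by", offset)           # 0:02:18

# A timestamp recovered from the device can then be adjusted accordingly.
recorded = datetime(2021, 2, 27, 22, 15, 0)
print("adjusted time:", recorded + offset)         # 2021-02-27 22:17:18
```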

4.12 To the same end, Professor Smith distinguished between the types of representation that the code in a device can make,1 and argued that where a computer is instructed to perform certain functions, many of which are performed in a mechanical way (such as the addition of the time and date to an email), the computer is producing real evidence, not hearsay. In illustrating the point, Professor Smith gave a number of examples where evidence is not hearsay.2 One example was that of Six’s thermometer (commonly known as a maximum–minimum thermometer), which he referred to as an instrument and not a machine. This is correct. The thermometer provides three readings: the current temperature, and the highest and the lowest temperatures reached since it was last reset. A human being can give evidence of his observation of the precise location of the mercury against the scale at a given time and date. The witness might be challenged as to the truthfulness of his recollection without calling into question the accuracy of the instrument. Such evidence will not be hearsay. Alternatively, the precision of the scale on the thermometer might be open to scrutiny, in which case it will be necessary to have the instrument tested by an appropriately qualified expert.3

1J. C. Smith, ‘The admissibility of statements by computer’ [1981] Crim LR 387.

2Smith, ‘The admissibility of statements by computer’, 390.

3This was also discussed by Penelope A. Pengilley, ‘Machine information: is it hearsay?’ (1982) 13(4) MULR 617, 625.

4.13 Further examples considered by Professor Smith included a camera that records an image, a tape recorder that records sound and a radar speedmeter that records the speed of a vehicle. In 1981, each of these machines was mechanical in construction, with the exception of the radar speedmeter, which also incorporated components that were instruments. None of the examples involved devices controlled by software written by human beings. Although it is possible to alter the image from a camera or the sound from a tape recording, or for a human being to lie about the reading from a radar speedmeter, nevertheless the evidence from such devices would not be hearsay.

4.14 In respect of software, Professor Smith indicated that a programmer may make mistakes (errors are common, for which see Chapter 5 on ‘reliability’), but mistakes can also be made when deciding the scale on a thermometer. He went on to suggest that ‘[t]his consideration goes to weight rather than admissibility. In any event it certainly has nothing to do with the hearsay rule.’1

1Smith, ‘The admissibility of statements by computer’ 390. One answer to this issue has been proposed by Professor Pattenden – that s 129(1) of the Criminal Justice Act 2003 be replaced ‘with a single test of admissibility for all factual representations that are not in substance the statement of a person but “machinespeak”, that is, those whose content is the outcome of creating machine-processing’: Rosemary Pattenden, ‘Machinespeak: section 129 of the Criminal Justice Act 2003’ [2010] Crim LR 623, 636–637. Professor Pattenden discusses the conflicting opinions relating to s 129(1) in detail.

4.15 Professor Seng proposed an analysis in 1997:

Computers which are used as data processing devices can be classified into the following categories: devices which accept human-supplied input and produce output, self-contained data processing devices which obtain input or take recordings from the environment without human intervention, and a hybrid of the two.1

1Daniel Seng, ‘Computer output as evidence’ [1997] SJLS 130, 173.

4.16 Steven Teppler also accepted that it is possible to categorize data into three types, treating digital data as hearsay:

(i) The memorandum ‘created’ by a human.

(ii) Digital data generated in part with human assistance.

(iii) Digital data generated without a human being.1

1Teppler, ‘Testable reliability’, 235–240.

4.17 Teppler has also suggested that a ‘fourth potential category, for which there has been no judicial analysis, has recently emerged as a consequence of computer programs that “listen and respond” to questions in natural language and with a “voice” that closely mimics a “real” human’.1 Arguably, this category fits into category three, for which see below.

1Teppler, ‘Testable reliability’, 235.

4.18 The authors of Archbold have also divided digital data into three categories:

(i) Where the device is used as a processor of data.

(ii) Where the software records data where there is no human input involved.

(iii) Where there is data recorded and processed by software that has been entered by a person, directly or indirectly.1

1Archbold: Criminal Pleading, Evidence and Practice (Sweet & Maxwell), 9–11, 9–14.

4.19 It is proposed that the three categories outlined by Professor Seng, Steven Teppler and the authors of Archbold be slightly amended to read as follows:

(i) Content written by one or more people (that is, where the device is used as a processor of data).

(ii) Records generated by the software that have not had any input from a human.

(iii) Records comprising a mix of human input and calculations generated by software.

Each of these categories is discussed below.

Category 1: Content written by one or more people

4.20 Records of electronic content that are written by one or more people include email messages, word processing files and instant messages. Unless the author of the software has included instructions to alter the content of the text that has been typed in by a human, the only function of the device is to store the information that has been input by the human being. However, Teppler suggests that all computer-generated information is hearsay of some sort, and that the data generated by an email program, for instance, remains hearsay because:

the receiving computer is carrying out the stated intent or declaration of some person who instructed the computer to make the assertion on his or her behalf (e.g., a programmer) to carry out some request (and provided that certain conditions are met) that the receiving computer was told by the sending computer as agent for that person, which in turn was requested by a statement or declaration of the person or sender.1

1Teppler, ‘Testable reliability’, 240.

4.21 Conceptually this must be right, but the status of the instructions issued by the software code at the material time is rarely relevant. This category, artificial as it might appear to be, enables content that was input by the maker of the statement to be separated from content made by the author of the software program – in the same way that the printed notepaper with the name of the person or organization, together with other information such as address and telephone number, is created by the printer, but is distinct from the content of the letter.

4.22 The content of the software program will not be relevant unless there is a dispute as to what data were entered, when and where they were entered, and by whom. In such circumstances, the relevant witnesses can be called to give oral evidence to determine the truth, failing which a suitably qualified digital evidence practitioner might be called to give evidence about the metadata associated with the document to help ascertain answers to these technical questions.
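As a hedged illustration of the kind of metadata a digital evidence practitioner might speak to (the file name is hypothetical, and a real examination would use forensic tools on a verified image rather than the live file system), an operating system records, among other things, when a file was last modified and accessed:

```python
import os
from datetime import datetime, timezone

path = "letter.docx"   # hypothetical document

info = os.stat(path)
print("size in bytes:", info.st_size)
print("last modified:", datetime.fromtimestamp(info.st_mtime, tz=timezone.utc))
print("last accessed:", datetime.fromtimestamp(info.st_atime, tz=timezone.utc))
```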

4.23 By way of example, consider whether a letter typed into a computer is a document produced by a computer. Professor Smith took the view that if the human author printed the document and then read the contents to verify the text, the author authenticates the text. Given this set of facts, the computer is a mere tool. Where the author does not read the printout, the document remains computer output.1 Professor Seng suggests that ‘it is difficult to see how reading what is clearly a computer-produced document converts it into one not produced by a computer. The printout remains clearly a document produced by a computer operated as a data storage device.’2 Professor Smith indicates that the person can authenticate the text after it has been printed. This does not mean that the act of authentication takes away the fact that the document was created on and remains stored on the device. This distinction can be important, as in the case of electronic wills. The court must establish whether, in the absence of the testator authenticating the will, the testator actually wrote the will and intended it to be their last will and testament. In such cases, it might also be necessary to give consideration to both the content written by the human and the software code that makes up the metadata.

1R. v Shephard (Hilda) [1993] Crim LR 295 (note), 297–298.

2Seng, ‘Computer output as evidence’ 178 – Professor Seng begins his discussion (at 177) by asking whether word-processed documents are computer output or recorded computer output.

4.24 Professor Tapper pointed out that computers include such facilities as spell checkers, calculators and automatic paragraph numbering, among other tools. This suggests that a word file (such as a letter) is processed computer output.1 Professor Smith also considered the same document being produced by a human typing on a typewriter. If the text – for the sake of illustration, a letter – is written by hand, or typed on a typewriter, or typed into a computer, the resultant content will be the same, apart from the type of print, typeface and the like, although the author might cause the data to remain stored on the device if it was a computer.2 The person writing the letter by hand or on a typewriter might use a dictionary to check their spelling in the same way that spelling can be checked on a computer using the spell checker. Whether the letter is written by hand, typed on a typewriter or typed into a computer, it is complete when it is on paper (in the case of the computer, when it is printed). The method used to record words on paper must be irrelevant, provided that the only evidence to be relied upon is the text that is recorded on the paper. If other factors are in issue, such as the purported author of the document, then clearly an examination of the digital data might be instructive. Professor Seng takes issue with Professor Smith’s characterization that the evidential quality of a letter changes immediately once it is read, without taking into account any characterization of its source. In such a case, where the computer is behaving as a storage device, the rebuttable presumption is that the code operating to make it behave as such is reliable, and issues as to authentication of this code do not, generally speaking, enter the evidential analysis. But there can be other software errors, for which see Chapter 5 on the ‘reliability’ of computers.

1Tapper, ‘Reform of the law of evidence in relation to the output from computers’, 86–88.

2A point made by Professor Seng, ‘Computer output as evidence’, 178.

4.25 The Law Commission in their report1 noted that the ‘present law draws a distinction according to whether the statement consists of, or is based upon, only what the machine itself has observed; or whether it incorporates, or is based upon, information supplied by a human being’.2 It was further noted that the hearsay rule did not apply to tapes, films or photographs, or to documents produced by machines that automatically record an event or circumstance.3 This was because the court is not being asked to accept the truth of an assertion made by any person, and the evidence is real evidence, not hearsay.

1Law Commission, Evidence in Criminal Proceedings: Hearsay and Related Topics (Law Com No 245, 1997).

2Law Commission, Evidence in Criminal Proceedings: Hearsay and Related Topics (Law Com No 245, 1997) para 7.43.

3Law Commission, Evidence in Criminal Proceedings: Hearsay and Related Topics (Law Com No 245, 1997) para 7.44.

4.26 That humans generally have control over a computer system is demonstrated in the case of Ferguson v British Gas Trading Limited,1 in which the Court of Appeal rejected the argument that letters sent out automatically by a computer were not the fault of British Gas. Computers only work on instructions given to them, and it followed that a person in British Gas, or authorized by British Gas, must have instructed the computer to initiate the letters in question. In this case, British Gas sent letters to the claimant that the court held were capable of amounting to unlawful harassment contrary to the Protection from Harassment Act 1997. In the words of Jacob LJ: ‘British Gas says it has done nothing wrong; that it is perfectly all right for it to treat consumers in this way, at least if it is all just done by computer.’2 Jacob LJ went on to indicate that he did not follow the reasoning of Martin Porter QC, counsel for British Gas, that ‘[as] the correspondence was computer generated … [the harassed victim] should not have taken it as seriously as if it had come from an individual’.3 Jacob LJ noted that computers operate on instructions given to them: ‘real people are responsible for programming and entering material into the computer. It is British Gas’s system which, at the very least, allowed the impugned conduct to happen.’4 Likewise, Sedley LJ roundly rejected the pathetic excuse offered by British Gas:

One excuse which has formed part of British Gas’s legal argument for striking out the claim, and which has been advanced as incontestable and decisive, is that a large corporation such as British Gas cannot be legally responsible for mistakes made either by its computerised debt recovery system or by the personnel responsible for programming and operating it. The short answer is that it can be, for reasons explained by Lord Justice Jacob. It would be remarkable if it could not: it would mean that the privilege of incorporation not only shielded its shareholders and directors from personal liability for its debts but protected the company itself from legal liabilities which a natural person cannot evade. That is not what legal personality means.5

1[2009] EWCA Civ 46, [2010] 1 WLR 785, [2009] 3 All ER 304, [2009] 2 WLUK 206, (2009) 106(8) LSG 18, (2009) 153(7) SJLB 34, [2009] CLY 3959.

2[2009] EWCA Civ 46 at [5].

3[2009] EWCA Civ 46 at [21].

4[2009] EWCA Civ 46 at [21].

5[2009] EWCA Civ 46 at [51].

Category 2: Records generated by the software that have not had any input from a human

4.27 Examples of records generated by software controlling a computer without any input from a human include computer data logs for the purposes of tracking activity and diagnostics, number plate recognition software,1 automatic connections made by telephone switches and the records of such calls made for billing purposes,2 records of ATM transactions, machine translation,3 and objects connected to the Internet, known as the Internet of Things.4 In one case, Antonio Boparan Singh was convicted of dangerous driving. Part of the evidence adduced by the prosecution came from the event data recorder (EDR) – a device fitted to the airbag system of his vehicle. The EDR established that the vehicle lost the equivalent of 42 mph in one-fifth of a second in the crash. This information helped the police to put Singh’s speed at around 72 mph.5

1https://www.police.uk/pu/advice-crime-prevention/automatic-number-plate-recognition-anpr/; for judicial consideration of automatic number plate recognition, see R. v Jackson (Royston) [2011] EWCA Crim 1870, [2011] 7 WLUK 643; Attorney General’s Reference (Nos 114 and 115 of 2009) [2010] EWCA Crim 1459, [2010] 6 WLUK 549; R. v Najib (Amaar) [2013] EWCA Crim 86, [2013] 2 WLUK 290; R. v Khan (Imran); R. v Mahmood (Amjed Khan); R. v Kajla (Jaspal) [2013] EWCA Crim 2230, [2013] 12 WLUK 57, [2014] Crim LR 520; R. v Welsh (Christopher Mark) [2014] EWCA Crim 1027, [2014] 5 WLUK 740. Interestingly, the absence of challenges to ANPR evidence in the English courts could be attributed to the fact that, for the large part, the defendants or the parties have admitted to the accuracy of such evidence and so no real dispute arises: see [125], Makdessi v Cavendish Square Holdings BV ParkingEye Ltd v Beavis [2015] UKSC 67, [2016] AC 1172, [2015] 3 WLR 1373, [2016] 2 All ER 519, [2016] 2 All ER (Comm) 1, [2016] 1 Lloyd’s Rep 55, [2015] 11 WLUK 78, [2015] 2 CLC 686, [2016] BLR 1, 162 Con LR 1, [2016] RTR 8, [2016] CILL 3769, Times, 23 November 2015, [2016] CLY 437, also known as Cavendish Square Holding BV v Makdessi, El Makdessi v Cavendish Square Holdings BV; also D (A Child) (Fact-finding Appeal), Re [2019] EWCA Civ 2302, [2019] 12 WLUK 409, [2020] 2 FCR 15, [2020] 7 CL 90, also known as M v X BC, at [32]. Even so, when a discrepancy arises in relation to ANPR evidence, as in the case of A (Death of a Baby), Re [2011] EWHC 2754 (Fam) [66] and [69], where there was testimonial evidence to corroborate the drivers’ testimony as to their movements and contradict the ANPR evidence, the independent verifiability of the vehicular movements coupled with the lack of authentication of the ANPR evidence led the court to exercise its discretion and choose to draw no conclusions from the ANPR evidence [158].

2Rosemary Pattenden, ‘Authenticating “things” in English law: principles for adducing tangible evidence in common law jury trials’ (2008) 12 E & P 273, suggests that ‘self-generated output’ can be categorized into two sub-divisions: output that contains no input from human thought, and output that draws directly or indirectly on information fed into the device by a person: 297; Julian Fulbrook, ‘Deadly distractions: mobile telephones and transport litigation’ (2018) 2 JPI Law 89, in which he cites Eyres v Atkinsons Kitchens & Bedrooms Ltd [2007] EWCA Civ 365, [2007] 4 WLUK 369, (2007) 151 SJLB 576, Times, 21 May 2007, [2007] CLY 2955.

3Nicole E. Crossey, ‘Machine translator testimony & the confrontation clause: has the time come for the hearsay rules to escape the stone age?’ (2020) 12 Drexel L Rev 561.

4David Caruso, Michael Legg and Jordan Phoustanis, ‘The automation paradox in litigation: the inadequacy of procedure and evidence law to manage electronic evidence generated by the “internet of things” in civil disputes’ (2019) 19 Macquarie LJ 157.

5Mark Cowan, ‘Crime files: picking up the pieces on Midland roads’ Birmingham Mail (Birmingham, 6 October 2010); an insurance company used data recorded from telematics technology installed in a motor vehicle to disprove 31 claims involving seven accidents over five months: Oliver Ralph, ‘Black box data expose £500,000 driver fraud’ Financial Times (London, 11 June 2016) 4; James Wade, ‘Emerging technologies in collision investigation’ (2016) 4 JPI Law 220; Amelia Murray, ‘A £90,000 bogus car insurance claim – and how the fraudsters were caught by their telematics box’, The Telegraph, 13 January 2018, https://www.telegraph.co.uk/insurance/car/90000-bogus-car-insurance-claim-fraudsters-caught-telematics/.
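A minimal sketch of a category 2 record, in the spirit of the event data recorder described in 4.27 (the field names, sampling interval and figures are invented; a real EDR is an embedded device with its own proprietary format): the record is generated entirely by software acting on sensor readings, with no human input.

```python
def record_crash_pulse(speed_samples_mph, interval_seconds):
    """Return the speed lost over the sampled window, as an EDR might log it."""
    return {
        "window_seconds": round(interval_seconds * (len(speed_samples_mph) - 1), 3),
        "speed_lost_mph": speed_samples_mph[0] - speed_samples_mph[-1],
    }

# Five samples taken 0.05 seconds apart during an impact (figures invented).
print(record_crash_pulse([72, 61, 48, 37, 30], 0.05))
# {'window_seconds': 0.2, 'speed_lost_mph': 42}
```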

4.28 It does not follow that the automatic communications that occur between software code are accurate. For instance, the records from a telephone service provider might be admitted to show that calls were made and received,1 but it does not follow that the same records can be used as a basis for showing that a SIM card used in a mobile telephone, and purportedly its user,2 were at a particular location or moved from location to location.3

1For an analysis in the context of New Brunswick, Canada, see Her Majesty the Queen v Dennis James Oland 2015 NBQB 244 (third ruling); Her Majesty the Queen v Dennis James Oland 2015 NBQB 245 (fourth ruling) and the observations by David M. Paciocco, ‘Proof and progress: coping with the law of evidence in a technological age’ (2013) 11(2) Canadian Journal of Law and Technology 181, which in turn are disputed in Ken Chasse, ‘Guilt by mobile phone tracking shouldn’t make “evidence to the contrary” impossible’, http://www.slaw.ca/2016/10/04/guilt-by-mobile-phone-tracking-shouldnt-make-evidence-to-the-contrary-impossible/.

2Cell site analysis was the subject of discussion in R. v Jackson (Royston) [2011] EWCA Crim 1870, [2011] 7 WLUK 643; Reg Coutts and Hugh Selby, ‘Safe and unsafe use of mobile phone evidence’ (Public Defenders Criminal Law Conference, Sydney, March 2009), http://www.publicdefenders.nsw.gov.au/Documents/safeunsafemobilephones.pdf, recommend that defence lawyers pay particular attention to the explanation of cell site analysis set out by Blaxell J in The State of Western Australia v Coates [2007] WASC 307, [211]–[220]; R. P. Coutts and H. Selby, ‘Problems with cell phone evidence tendered to “prove” the location of a person at a point in time’ (2016) 13 Digital Evidence and Electronic Signature Law Review 76.

3Michael Cherry, Edward J. Imwinkelried, Manfred Schenk, Aaron Romano, Naomi Fetterman, Nicole Hardin and Arnie Beckman, ‘Cell tower junk science’ (2012) 95(4) Judicature 151, 151–52; Aaron Blank, ‘The limitations and admissibility of using historical cellular site data to track the location of a cellular phone’ (2011) 18(1) Rich J L & Tech 10; Judge Herbert B. Dixon Jr, ‘Scientific fact or junk science? Tracking a cell phone without GPS’ (2014) 53(1) Judges’ J 37; Graeme Horsman and Lynne R. Conniss, ‘Investigating evidence of mobile phone usage by drivers in road traffic accidents’ (2015) 12 Digital Investigation S30, S37; Alex Biedermann and Joëlle Vuille, ‘Digital evidence, “absence” of data and ambiguous patterns of reasoning’ (2016) 16 Digital Investigation S86, S94; for the case of Phuong Canh Ngo, see R v Ngo [2001] NSWSC 1021 (the sentence); R v Ngo [2003] NSWCCA 82 (appeal against conviction); David Patten (Judicial Officer Conducting Inquiry), Report to the Chief Justice of New South Wales (The Hon J J Spigelman AC) of the Inquiry into the Conviction of Phuong Canh Ngo for the murder of John Newman (14 April 2009), http://www.lawlink.nsw.gov.au/practice_notes/nswsc_pc.nsf/6a64691105a54031ca256880000c25d7/f1ef2541db38ae82ca25759b00052606/$FILE/Report_Phuong_Ngo_140409.pdf; Phuong Canh Ngo – Application under Part 7 Crimes (Appeal and Review) Act 2001 [2010] NSWSC 981 (hearing after Report published).

Category 3: Records comprising a mix of human input and calculations generated by software

4.29 An example of records comprising a mix of human input and calculations generated by software is that of a financial spreadsheet program that contains human statements (input to the spreadsheet program) and computer processing (mathematical calculations performed by the spreadsheet program). From an evidential point of view, the issue is whether the person or the software created the content of the record, and how much of the content was created by the software and how much by the human. It is possible that the quality of the software acts to undermine the authenticity of the data, which may in turn affect the truth of the statement tendered in evidence. The algorithms in spreadsheet programs are good examples of where the software code affects the truth of the statement. For a more detailed analysis, see Chapter 6 on authentication.
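A minimal sketch of such a mixed record (invented figures, with Python standing in for a spreadsheet formula): the figures are human statements, the total is generated by software, and the truth of the total depends on the correctness of the code as well as on the inputs.

```python
invoice_lines = [125.00, 60.00, 15.50]   # human input: figures typed into the sheet
vat_rate = 0.20                          # human input

def total_including_vat(lines, rate):
    # The software-generated part of the record. A defective formula here
    # (for example, one that applied VAT twice) would falsify the total even
    # though every human-entered figure is accurate.
    return round(sum(lines) * (1 + rate), 2)

print(total_including_vat(invoice_lines, vat_rate))   # 240.6
```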

4.30 Professor Pattenden suggests that ‘most representations of fact require human intervention at some point’,1 which must be right. The Law Commission report also indicated:

By contrast, the law does sometimes exclude evidence of a statement generated by a machine, where the statement is based on information fed into the machine by a human being. In such a case, it seems, the statement by the machine is admissible only if the facts on which it is based are themselves proved.2

1Pattenden, ‘Machinespeak’, 633.

2Law Commission, Evidence in Criminal Proceedings: Hearsay and Related Topics (Law Com No 245, 1997) para 7.46.

4.31 This comment distinguishes between information fed into a machine (the word ‘computer’ is not used, but the word ‘machine’ is presumably meant to include a computer or computer-like device) and the instructions contained in software code written by human beings that are essential for a device to work. Where a person inputs information into a computer, and that information is to be relied upon as to the truth of the statement, then the person should give oral evidence of this action. In contrast, the software code that might be used to transform the raw data into usable information is not necessarily relevant, depending on the purpose for which the output is adduced in evidence. To this end, the Law Commission1 compared the cases of R v Wood (Stanley William)2 and R v Coventry Justices, Ex p Bullard.3 In Wood, the evidence of the analysis by a computer of tests carried out by chemists was not considered to be hearsay, because the chemists gave oral evidence of the results of the tests. The calculations, however, were performed by the computer under the instructions of the person who wrote the software code (in this case, a Mr Kellie): the software analysed the data in accordance with the instructions Mr Kellie gave it, and the computer was not capable of analysing the data without that code. The chemists could give oral evidence only of the results produced by the program. This means that the truth of the content of the output of the computer was predicated upon the software code created by Mr Kellie.

1Law Commission, Evidence in Criminal Proceedings: Hearsay and Related Topics (Law Com No 245, 1997) para 7.47.

2[1982] 6 WLUK 191, (1983) 76 Cr App R 23, [1982] Crim LR 667, [1983] CLY 636.

3[1992] 2 WLUK 233, (1992) 95 Cr App R 175, [1992] RA 79 [1992] COD 285, (1992) 142 NLJ 383, Times, 24 February 1992, Independent, 26 February 1992, Guardian, 11 March 1992, [1992] CLY 2058; ‘Print-Out Inadmissible as Hearsay’ (1993) 57 JCL 232.

4.32 In comparison, the computer printout in R v Coventry Justices, Ex p Bullard included a statement that a person was in arrears with his community charge. This was held to be inadmissible hearsay because the content of the printout contained information that had been put into the computer by a human, and the printout had not been properly proved. The Law Commission, agreeing with the result, proposed a similar analysis:

An alternative view is that the statement by the machine, properly understood, is conditional on the accuracy of the data on which it is based; and that, if those data are not proved to have been accurate, the statement therefore has no probative value at all. The question of hearsay does not arise, because the statement is simply irrelevant.1

1Law Commission, Evidence in Criminal Proceedings: Hearsay and Related Topics (Law Com No 245, 1997) para 7.48.

4.33 In Mehesz v Redman,1 Zelling J concluded that the output of an auto-lab data analyser was hearsay, given that the analysis relied on software where the writer of the software had not been called, and where modifications had been made but the person responsible for the modifications had not been called either. A similar decision was made in Holt v Auckland City Council,2 where evidence of the analysis of the amount of alcohol in a blood sample was excluded by the New Zealand Court of Appeal because the truth of the statement tendered was predicated upon the software code written by a programmer who was not called to give evidence, which meant there was a gap in the continuity of proof. In contrast, in Wood, the oral evidence of the results of the tests was read out by the chemists from printouts from the computer (which were real evidence), and if the results were to be challenged for their accuracy, then the integrity of the software program might need to be tested.

1(1979) 21 SASR 569.

2[1980] 2 NZLR 124.

4.34 The instructions written by a human in the form of software code can, depending on the circumstances, be just that: instructions to the machine to perform a particular task. This is illustrated in the case of Maynard.1 An item of software, called a trace, had been written to ascertain whether a particular employee was obtaining access to private information in a computer system, and if so, to record the time and date that the employee viewed the data. The employee was subsequently prosecuted. The magistrate refused to admit the evidence of the printout of the trace data, partly because he considered the record of the time and date to be hearsay. On appeal, Wright J rejected this analysis. The person who wrote the code gave evidence at trial, both as to the reason for writing the code and as to how it worked. The judge was of the following opinion:

it seems to me that once the trace was applied to the respondent’s log-on identification, the process then undertaken by the trace was entirely mechanical in that the peregrinations through the database by that computer user was automatically traced through the system and were recorded and stored ready for retrieval in report form as soon as the trace print-out was called for.2

1(1993) 70 A Crim R 133, sub nom Rook v Maynard (1993) 126 ALR 150.

2(1993) 70 A Crim R 133 at 141.

4.35 Wright J then went on to illustrate the separate steps:

Although much more complex in its operation than the following description suggests, the process, stripped to its essentials, involved (a) The implementation of the trace program and its attachment to the respondent’s log-on identification. This was a human function proved by direct evidence from Mr Poulter [the person who wrote the code]. (b) Once attached, the trace followed the log-on identification number and the user and (c) when the user tapped into or called up a particular file from the database, the trace was able to store details of this event in its memory for subsequent retrieval.1

1(1993) 70 A Crim R 133 at 142.

4.36 There was no evidence that suggested that the trace program modified any other programs in the computer, and if there were any such failings, the program designer could have been cross-examined on them. For this reason, the statement was not hearsay.
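The steps described by Wright J can be rendered as a minimal sketch (the identifiers and data structure are invented, and the actual trace in Maynard was, as the judge noted, much more complex): once the trace is attached to a particular log-on identification, the recording of which records that user views, and when, is entirely mechanical.

```python
from datetime import datetime, timezone

access_log = []   # details stored ready for retrieval in report form

def view_record(log_on_id, record_id):
    # The trace is attached to one log-on identification; once attached,
    # the recording is entirely mechanical.
    if log_on_id == "EMP-042":
        access_log.append({
            "user": log_on_id,
            "record": record_id,
            "viewed_at": datetime.now(timezone.utc).isoformat(),
        })
    return f"contents of {record_id}"

view_record("EMP-042", "FILE-7731")   # traced and logged
view_record("EMP-099", "FILE-0002")   # not traced, not logged
for entry in access_log:
    print(entry)
```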

Challenging the code to test the truth of the statement

4.37 One of the most frequently mounted challenges to evidence in digital form concerns the admissibility of the output from breath-testing devices. Such challenges are made across jurisdictions, but where a device has been authorized by an appropriate authority, the applicable legislation usually means that judges either do not have the power to require the prosecution to reveal the software code, or refuse to do so because, it is claimed, the defence has not provided sufficient evidence to support the challenge that the device might not be reliable.1 However, in State of New Jersey v Chun the Supreme Court of New Jersey in the United States ordered the software of a new breath-testing device – the Alcotest 7110 MKIII-C – to be reviewed in detail and tested for scientific validity.2 After extensive testing, the court concluded that the Alcotest, using New Jersey Firmware version 3.11, ‘is generally scientifically reliable’, but ordered modifications to enable its results to be admitted into legal proceedings.3 The analysis of the source code indicated that there was a fault when a third breath sample was taken, which could cause the reading to be incorrect, and the court saw fit to order a change in one of the formulae used in the software. This is a significant decision because the court accepted, albeit implicitly, that the software that controlled the device, written by a human, was defective. This in turn meant that, had the code not been remedied, the data relied upon for the truth of the statement would have been defective, which would affect the accuracy and truthfulness of the evidence.

1Peter Hungerford-Welch, ‘Disclosure: DPP v Walsall Magistrates’ Court’ (2020) 4 Crim LR 335; DPP v Walsall Magistrates’ Court; DPP v Lincoln Magistrates’ Court [2019] EWHC 3317 (Admin), [2019] 12 WLUK 61, [2020] RTR 14, [2020] Crim LR 335, [2020] ACD 21, [2020] 5 CL 43; R. (on the application of DPP) v Manchester and Salford Magistrates’ Court [2017] EWHC 3719 (Admin), [2019] 1 WLR 2617, [2017] 7 WLUK 154, also known as DPP v Manchester and Salford Magistrates’ Court.

2194 N.J. 54, 943 A.2d 114; an application authorizing the discovery of source code used in the Intoxilyzer 5000 breath test equipment failed for procedural reasons in State of Florida v Bjorkland, 924 So.2d 971 (Fla. 2d DCA 2006).

3943 A.2d 114 at 120.
