Aug 18 2014
 

 


A nice article (that our researcher Lee Tobin contributed to) in The Irish Times newspaper on how wearable technology might impact our working lives. Here is an excerpt:

“…Wearable technologies are nothing new. The Google Glass consumer-oriented product allows owners to do many of the things they could do on their tablet with a pair of mock spectacles.

Then you have apps such as FitBit, which track your activity, so you can feel better about yourself as you walk home from the pub.

Likewise there are numerous wearable technologies used to monitor patients with various health conditions.

So who’s to say this kind of technology couldn’t be used in other settings? To time an employee’s smoke break, for example, or to alert the boss when someone is spending too much time chatting at the water cooler.

We are already living in the age of the “quantified self”. So it was only a matter of time before business bought in…”

 

Aug 03 2013
 

During cybercrime investigations it’s common to find that a suspect has used technology in a country outside of the territorial jurisdiction of the Law Enforcement agency investigating the case. The suspects themselves may also be located outside of the territory of the investigating group. A country may be able to claim jurisdiction over a suspect or device that is located outside of its territory [1]; however, foreign Law Enforcement would not have jurisdiction to act within the territory of another country unless explicitly granted. This means that if a suspect or digital device is located in another territory, the investigating country may need to request assistance from the country that has territorial jurisdiction. This request could take the form of a mutual legal assistance request, a request through international communication channels such as INTERPOL and United Nations networks, a personal contact within the country of interest, etc.


It appears to be increasingly common for Law Enforcement to use personal contacts to quickly begin the investigation process in the country of interest and request that data be preserved, while at the same time making an official request for cooperation through official channels. This is simply because official channels are currently far too slow for many types of cybercrime investigation that rely on preserving data before the records are overwritten or deleted; a problem that Law Enforcement has been communicating for over a decade.

For similar reasons, Law Enforcement in many countries commonly access data stored on servers in countries outside of their jurisdiction. When and how they access this data is usually not well defined, because the law in most, if not all, countries is also failing to keep up with changes in cross-border digital crime. However, a recent work by the NATO Cooperative Cyber Defence Centre of Excellence, the Tallinn Manual on the International Law Applicable to Cyber Warfare (Tallinn Manual), attempted to explicitly state some of these issues and their practical implications, albeit in the context of Cyber Warfare.

In the Tallinn Manual the expert group considered issues of jurisdiction applied to cyber infrastructure. Of these considerations, they claim that “… States may exercise sovereign prerogatives over any cyber infrastructure located on their territory, as well as activities associated with that cyber infrastructure” [2] with some exceptions. Further, Rule 1 paragraph 8 stipulates that:

A State may consent to cyber operations conducted from its territory or to remote cybercrime operations involving cyber infrastructure that is located on its territory.

In this rule, the expert group gives the explicit example that a State may not have the technical ability to handle a situation within their territory, and thus may give permission for another State to conduct cyber activities within their jurisdiction.

Much of the discussion on sovereignty, jurisdiction and control stipulates the scope of control a State possesses; Rule 5, however, specifies the obligation of the State to other States: “the principle of sovereign equality entails an obligation of all States to respect the territorial sovereignty of other States”. The expert group elaborates in Rule 5 paragraph 3, claiming that:

The obligation to respect the sovereignty of another State… implies that a State may not ‘allow knowingly its territory to be used for acts contrary to the rights of other States’.

Rule 5 paragraph 3 has interesting implications in cyber space. For example, the infrastructure of many different countries may be used in an attack against a single victim. Because of this rule, each country whose infrastructure was involved is obliged not to allow these attacks to continue once it is aware of them. A State, however, is not necessarily obliged to actively look for attacks against other countries from its infrastructure.

In other words, if an attack is made from (or through) State A to State B, and State B makes State A aware of the attack, then State A is normally obliged to help in stopping — and presumably helping to investigate — the attack on State B, if possible.

The Tallinn Manual goes on with Rule 7, stating that an attack originating from a State is not proof of a State’s involvement, but “… is an indication that the State in question is associated with the operation”. However, instead of assuming that the State could be guilty, in this work we propose to assume the innocence of the State whose infrastructure is being used in an attack.

Let’s assume State B is affected by a cyber attack apparently originating from State A. State B then attempts to make State A aware of the attack. State B will receive essentially one of three responses from State A: a response to collaborate, a response not to collaborate, or no response. In the case of no response, if there is an assumption of innocence of State A, then State B may also assume that State A, being obliged to help, cannot stop the attacks because of a lack of technical ability, resources, etc. In this way, consent to conduct remote cyber investigations on infrastructure within State A could potentially also be assumed.

Thus, when requests for assistance are made between States, if one State does not, or cannot, respond to the request, cyber investigations can continue. Under this assumption, countries that intend to collaborate but have limited investigation capacity, convoluted political and/or communication processes, or simply no infrastructure will gain increased capacity to fight abuses of their infrastructure, with help from countries that have more resources.

By assuming the innocence of a State, at least four current problem areas can be improved. First, assuming a State’s consent to remote investigation upon no reply to international assistance requests will reduce delays during cross-border investigations for all involved countries, despite weaknesses in bureaucratic official request channels. Second, such an assumption will force States to take a more active role in explicitly denying requests, if so desired, rather than simply ignoring official requests, which wastes time and resources for everyone involved. Third, depending on the reason for the denial, an explicit denial to investigate attacks against other countries would be somewhat more conclusive evidence of State A’s intention to attack, or to allow attacks on, State B, and could potentially help where attack attribution is concerned. Finally, such an assumption may also hold where mutual legal assistance currently, and oftentimes, breaks down: when dual criminality does not exist between two countries [3].

Essentially, if an attack on Country B occurs from infrastructure in Country A, Country A will either want to help stop the attack or not. By assuming that Country A does want to help but is simply unable to, this forces Country A to be explicit about their stance on the situation while at the same time ensuring that international cybercrime investigations can be conducted in a timely manner.

James, J. I. (2013) “An Argument for Assumed Extra-territorial Consent During Cybercrime Investigations”. VFAC Review. Issue 25. [PDF]

Bibliography

  1. Malanczuk, P. (1997). Akehurst’s Modern Introduction to International Law (7th ed.). Routledge.
  2. Schmitt, M. N. (Ed.). (2013). Tallinn Manual on the International Law Applicable to Cyber Warfare. Cambridge University Press.
  3. Harley, B. (2010). A Global Convention on Cybercrime?. Retrieved from http://www.stlr.org/2010/03/a-global-convention-on-cybercrime/

 

Image courtesy of jscreationzs / FreeDigitalPhotos.net

Apr 08 2013
 

In many police investigations today, computer systems are somehow involved. The number and capacity of computer systems needing to be seized and examined is increasing, and in some cases it may be necessary to quickly find a single computer system within a large number of computers in a network. To investigate potential evidence from a large quantity of seized computer systems, or from a computer network with multiple clients, triage analysis may be used.

The current challenges to conducting on-scene investigations are that each system must be booted and examined in turn, many investigation processes are not automated, multiple boot media may be needed, and there is no centralized point where results can be stored. All of these challenges can make the on-scene investigation process very time consuming if the network consists of hundreds of computers divided over several floors. Prior works provided a number of foundational benefits, but still had limitations that did not fit our needs. The approach taken in this work was to redesign an open source, forensically sound PXE environment that meets the following conditions:

  • Clients are able to boot a “forensic” file-system using DHCP, PXE and TFTP
  • Client–server based model: the client and server can communicate with each other
  • Network storage between clients and server, to serve files and store search results
  • Keyword searching in ASCII and UNICODE
  • File hashing and comparing with a centralized hash database
  • Clients are accessible through the server via SSH
  • Client’s local hard disk drives are accessible as a local disk on the server through ATA Over Ethernet (AoE)

This approach is adopted, not to conduct a full digital forensic investigation on-scene, but to conduct digital forensic triage. Triage is a medical term defined as:

A process for sorting injured people into groups based on their need for or likely benefit from immediate medical treatment. Triage is used in hospital emergency rooms, on battlefields, and at disaster sites when limited medical resources must be allocated (Triage, n.d.).

To derive the definition of digital forensic triage, we apply the medical definition specifically to computer forensics, resulting in:

A process of sorting computer systems into groups, based on the amount of relevant information or evidence found on these computer systems (Koopmans, 2010).

Based on this definition, the goal of the solution is not explicitly for exhibit exclusion purposes, but to sort analyzed systems by likely relevance.

The result is a client-server based solution for automation of basic digital forensic investigation processes on many clients over a network (Figure 1).

Figure 1. Automated Network Triage

A Triage server is placed on a network (preferably a network physically separate from the suspect’s network), and clients (suspect computers) are booted into a live environment via PXE or boot disk. They connect to the Triage server, load data and analysis scripts, and begin to conduct analysis on the suspect machine’s connected hard drives automatically. All results are reported back to the Triage server, and any suspicious hits can be investigated remotely using a variety of standard digital forensic investigation tools.
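
To make the client-side workflow more concrete, below is a minimal, illustrative Python sketch of what a triage client could do once booted: hash files on the mounted suspect disk, compare them against a centralized hash list, search for keywords in ASCII and UTF-16, and write results to network storage. This is not the published tool; the mount points, file names and keywords are hypothetical, and the sketch reads whole files into memory for simplicity.

"""Illustrative client-side triage sketch (not the published tool).

Assumes the suspect drive is mounted read-only at SUSPECT_MOUNT and that the
Triage server exports network storage mounted at SHARE containing a hash
database (one hex digest per line). All paths and names are hypothetical.
"""
import hashlib
import json
import os
import socket

SUSPECT_MOUNT = "/mnt/suspect"      # read-only mount of the client's local disk
SHARE = "/mnt/triage-share"         # network storage provided by the Triage server
HASH_DB = os.path.join(SHARE, "known_hashes.txt")
KEYWORDS = ["invoice", "bitcoin"]   # example search terms

def sha1_of(path, chunk=1 << 20):
    """Hash a file in chunks so large files do not exhaust memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def keyword_hits(path, keywords):
    """Search file content for ASCII and UTF-16LE encodings of each keyword."""
    with open(path, "rb") as f:
        data = f.read()                 # simplification: whole file in memory
    return [kw for kw in keywords
            if kw.encode("ascii") in data or kw.encode("utf-16-le") in data]

def main():
    with open(HASH_DB) as f:
        known = {line.strip().lower() for line in f if line.strip()}

    findings = []
    for root, _dirs, files in os.walk(SUSPECT_MOUNT):
        for name in files:
            path = os.path.join(root, name)
            try:
                digest = sha1_of(path)
                hits = keyword_hits(path, KEYWORDS)
            except OSError:
                continue                # unreadable file: skip and keep going
            if digest in known or hits:
                findings.append({"path": path, "sha1": digest, "keywords": hits})

    # Store results centrally so the examiner can rank machines by relevance.
    out = os.path.join(SHARE, "results", socket.gethostname() + ".json")
    os.makedirs(os.path.dirname(out), exist_ok=True)
    with open(out, "w") as f:
        json.dump(findings, f, indent=2)

if __name__ == "__main__":
    main()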

Works cited:

  1. Triage. In: Dorland’s medical dictionary for health consumers; n.d.
  2. Koopmans M. The art of triage with (g)PXE. Dublin: University College Dublin; 2010. p.51

For more information, please see:

Koopmans, M. B., & James, J. I. (2013). Automated network triage. Digital Investigation, 1–9. doi:10.1016/j.diin.2013.03.002

Mar 19 2013
 

Earlier this year, researchers from the Digital Forensic Investigation Research Group had a chapter published in the book “Cybercrime and Cloud Forensics: Applications for Investigation Processes”. There were contributions from authors discussing practical as well as theoretical aspects of digital crime, investigation, side channel attacks, law, international cooperation, and the future of crime and Cloud computing environments.

DigitalFIRE specifically focused on how Cloud computing is likely to affect current digital forensic investigators. Instead of assuming that Cloud environments will completely revolutionize the way crime and digital investigations are conducted, we assessed Cloud environments in terms of current digital investigation models. Indeed, new challenges to investigations were found to exist when considering Cloud service models, but many of these challenges stem from increased connectivity and less control. In terms of technology, some challenges exist in Cloud environments that were previously not as common; however, Cloud environments also potentially bring a number of benefits for digital investigators that may ultimately make some types of investigations on Cloud environments easier than on stand-alone systems.

Our chapter aims to be a high-level introduction to the fundamental concepts of both digital forensic investigations and cloud computing for non-experts in one or both areas. Once fundamental concepts are established, we begin to examine Cloud computing security-related questions; specifically, how past security challenges are inherited or solved by cloud computing models, as well as new security challenges that are unique to Cloud environments. Next, an analysis is given of the challenges and opportunities Cloud computing brings to digital forensic investigations. Finally, the Integrated Digital Investigation Process model is used as a guide to illustrate considerations and challenges during an investigation involving cloud environments.

James JI, Shosha AF, Gladyshev P. (2013). Digital Forensic Investigation and Cloud Computing. In K. Ruan (Ed.), Cybercrime and Cloud Forensics: Applications for Investigation Processes (pp. 1-41). Hershey, PA: IGI Global. doi:10.4018/978-1-4666-2662-1.ch001 [LINK][PDF]

Feb 27 2013
 

The concept of signatures is used in many fields, normally for the detection of some sort of pattern. For example, antivirus and network intrusion detection systems sometimes implement signature matching to attempt to differentiate legitimate code or network traffic from malicious data. The principle behind these systems is that, within a given set of data, malicious data will have some recognizable pattern. If malicious code, for example, has a pattern that differs in some way from non-malicious data, then the malicious data may be differentiated with signature-based methods. In terms of malware, however, signature-based methods are becoming less effective as malicious software gains the ability to alter or hide malicious patterns, for example through polymorphic or encrypted code.

This work suggests that signature-based methods may also be used to detect patterns of user actions on a digital system. This is based on the principle that computer systems are interactive: when a user interacts with the system, the system is immediately updated. In this work, we analyzed a user’s actions in relation to timestamp updates on the system.

During experimentation, we found that timestamps on a system may be updated for many different reasons. Our work, however, determined that there are at least three major timestamp update patterns given a user action. We define these as Core, Supporting and Shared timestamp update patterns.

Core timestamps are timestamps that are updated each time, and only when, the user action is executed.

Supporting timestamps are timestamps that are updated sometimes, and only when, the user action is executed.

Shared timestamps are timestamps that are shared between multiple user actions. So, for example, the timestamps of a single file might be updated by two different user actions. With shared timestamps it is impossible to determine which action updated the timestamp without more information.

By categorizing timestamps into these three primary categories, we can construct timestamp signatures to detect whether and when a user action must have happened. For example, since only one action can update Core timestamps, the time value of the timestamp is approximately the time at which the user action must have taken place.

The same can be said for Supporting timestamps, but we would expect Supporting timestamp values to be at or before the last instance of the user action.

Using this categorization system, and finding associations between timestamps and user actions, past user actions can be reconstructed using only readily available metadata in a computer system.
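
As a rough illustration of this reasoning, the following Python sketch checks a hypothetical timestamp signature against observed file times. The file paths, categories and timestamps are invented for the example; real signatures would be derived experimentally, as described in the article.

"""Toy illustration of reasoning with Core and Supporting timestamps.

The signature and observed values below are invented; they only show how the
categories could be used to estimate when an action last occurred.
"""
from datetime import datetime

# Hypothetical signature for the action "run application X".
SIGNATURE = {
    "core":       ["C:/Users/alice/AppData/X/session.log"],
    "supporting": ["C:/Windows/Prefetch/X.EXE-1234.pf"],
    "shared":     ["C:/Users/alice/NTUSER.DAT"],   # cannot discriminate between actions
}

# Last-modified times observed on the suspect system (also invented).
OBSERVED = {
    "C:/Users/alice/AppData/X/session.log": datetime(2013, 2, 20, 14, 5),
    "C:/Windows/Prefetch/X.EXE-1234.pf":    datetime(2013, 2, 20, 14, 5),
    "C:/Users/alice/NTUSER.DAT":            datetime(2013, 2, 21, 9, 30),
}

def estimate_last_occurrence(signature, observed):
    core = [observed[p] for p in signature["core"] if p in observed]
    if not core:
        return None                     # no Core evidence: occurrence time unknown
    # Core timestamps are only updated by this action, every time it happens,
    # so the latest Core value approximates the last occurrence of the action.
    last = max(core)
    # Supporting timestamps are only updated by this action, but not every
    # time, so their values should be at or before the last occurrence.
    for p in signature["supporting"]:
        if p in observed and observed[p] > last:
            print("Inconsistent Supporting timestamp:", p)
    return last

print("Action last occurred around:", estimate_last_occurrence(SIGNATURE, OBSERVED))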

For more information, please see our article on this topic:

James, J., P. Gladyshev, and Y. Zhu. (2011) “Signature Based Detection of User Events for Post-Mortem Forensic Analysis”. Digital Forensics and Cyber Crime: Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering. Volume 53, pp 96-109. Springer. [PDF][arXiv:1302.2395]

Oct 04 2012
 

Malware creators are continually looking for new methods to evade malware detection engines. A popular evasion method is based on malicious code obfuscation that changes the syntax of the code while preserving its execution semantics. If the malware signature relies on the syntactic features of the malicious code, it can be evaded by obfuscation techniques. In this paper, we propose a novel approach to the development of evasion-resistant malware signatures. The idea is that the signature is based on the malware’s execution profile extracted from the OS kernel data structure objects rather than on syntactic information. As a result, the signature is more resistant to malware obfuscation techniques and is more resilient in detecting malicious code variants.

To evaluate the effectiveness of the proposed approach, a prototype signature generation tool called SigGENE was developed. The effectiveness of signatures generated by SigGENE was evaluated using an experimental rootkit-simulation tool that employs obfuscation techniques commonly found in rootkits. In further experiments, different syntactic variants of the same real-world malware were used to verify the real-world applicability of the proposed approach. The experiments show that the proposed approach is effective not only in generating signatures that detect malware and its variants, but also in producing execution profiles that can be used to characterize different malicious attacks.

Paper [Evasion-Resistant Malware Signature Based on Profiling Kernel Data Structure Objects].

Oct 04 2012
 

Many existing methods of forensic malware analysis rely on the investigators’ practical experience rather than hard science. This paper presents a formal (i.e. based on mathematics) approach to reconstructing activities of a malicious executable found in a victim’s system during a post-mortem analysis. The behavior of the suspect executable is modeled as a finite state automaton where each state represents behavior that results in an observable modification to the victim’s system. The derived model of the malicious code allows for accurate reasoning and deduction of the occurrence of malicious activities even when anti-forensic methods are employed to disrupt the investigation process.

Paper [Towards Automated Forensic Event Reconstruction of Malicious Code].

Poster [Automated Forensic Event Reconstruction of Malicious Code].

Oct 04 2012
 

When a malware outbreak happens in an organization, one of the main questions that needs to be investigated is how the malware got in. It is important to get an answer to this question to identify and close the exploited technical and/or human vulnerabilities. This paper proposes a method for malware intrusion path reconstruction in a network of computers running Microsoft Windows. The method is based on the analysis of Windows Restore Points from the compromised computers. The idea is that malware infection traces from different computers can be correlated in time to identify the progress of the malware through the network and to identify the likely initial point of infection.
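
As a toy illustration of the correlation step only (not the Restore Point analysis itself), the Python sketch below orders the earliest malware trace recovered from each host to suggest a likely initial infection point and spread order; the host names and times are invented.

"""Toy illustration of ordering infection traces in time (invented data)."""
from datetime import datetime

# Earliest malware-related trace recovered from each host (hypothetical).
earliest_trace = {
    "WKSTN-07":  datetime(2012, 9, 3, 11, 42),
    "WKSTN-02":  datetime(2012, 9, 3, 9, 15),   # earliest, likely initial infection
    "FILESRV-1": datetime(2012, 9, 3, 13, 5),
}

# Sorting by time gives a candidate intrusion path through the network.
for host, when in sorted(earliest_trace.items(), key=lambda kv: kv[1]):
    print(f"{when:%Y-%m-%d %H:%M}  {host}")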

A simulated case study is given that demonstrates the viability of the proposed attack path reconstruction technique.

Paper [A Novel Methodology for Malware Intrusion Attack Path Reconstruction].

Sep 13 2012
 

When conducting an investigation, many statements are given by witnesses and suspects. A “witness” could be considered as anything that provides information about the occurrence of an event. While a witness may traditionally be a human, a digital device – such as a computer or cell phone – could also help to provide information about an event. Once a witness provides a statement, the investigator needs to evaluate the level of trust he or she places in the validity of the statement. For example, a statement from a witness that is known to lie may be considered less trustworthy. Similarly, in the digital realm, information gathered from a device may be less trustworthy if the device has been known to be compromised by a hacker or virus.

When an investigator gets statements from witnesses, the investigator can then begin to restrict the possible events that could have happened based on that information. For example, if a trustworthy witness says she saw a specific suspect at a specific time, and the suspect claims to have been out of the country at that time, these are conflicting statements. A witness statement may not be true for a number of reasons, but the statement may be highly probable. At a minimum, conflicting statements indicate that one or both statements should be investigated further to find either inculpatory or exculpatory evidence.

If an action happens that affects a computer system, observation of the affected data in the system could be used as evidence to reduce the possible states the system could have been in before its current state. Taking this further, if we create a complete model of a system, then without any restriction on the model, any state of the system could possibly be reachable.

Computer systems can be modeled as finite state automata (FSA). In this model, each state of the system is represented as a state in the FSA. The set of all states is defined as Q. Each action that alters the state of the system can be represented as a symbol in the alphabet (Σ) of the automaton. Moving from one state to another is controlled by a transition function δ where δ: Q × Σ → Q.

In the case of an investigation of a computer system, the investigator may be able to directly observe only the final state of the system. The set of final, or accepting, states is defined as F, where F ⊆ Q. The start state (q0, where q0 ∈ Q) is likely to be unobservable, and may be unknown. Because of this, any state in the model may potentially be a start state. To account for this, a generic start state g, where g ∉ Q, can be defined. g is a generic start state with a transition to each state in Q on each input leading to that particular state. The result of this process is a model of the system that allows for any possible transitions in the system that result in the observed final state from any starting state.

This FSA of the system can then be used to test statements about interactions with the system. As a very basic example, consider an FSA with only two states. The first state is the system before a program is run, when no prefetch entry has been created (!PrefetchX). The second state is after a program has been run, and a prefetch entry has been created (PrefetchX). The transition symbol is defined as “Run_Program_X”. The FSA can be visualized as:

(!PrefetchX) -> Run_Program_X -> (PrefetchX)

For the sake of this example, it is known that a prefetch entry will not be created unless a program is run, so the start state is defined as (!PrefetchX). An investigator observes that in the final state of the system PrefetchX did exist, so the final accepting state is (PrefetchX).

A suspect who normally uses the system is asked whether she executed Program X, and she claims she did not. Her statement may then also be modeled in terms of the previous FSA, where any transition is allowed except “Run_Program_X”. Her statement can be visualized as:

(*) -> !Run_Program_X -> (*)

In this statement, she is claiming that any state and transition is possible except for “Run_Program_X”.

When both the system and the suspect’s statement are modeled, the two FSAs can be intersected to determine whether the final observed state of the system is reachable with the restrictions the suspect’s statement places on the model. In the given example, the only possible transition that leads to the observed final state is Run_Program_X. If the system model were intersected with the suspect’s statement, the final state (PrefetchX) would not be reachable because the transition that leads to the final state would not be possible. In this case, the suspect’s statement is inconsistent with the observed final state, and should therefore be investigated further.
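
The following Python sketch is a minimal, illustrative implementation of this two-state example. It treats the intersection as a reachability check over the set of transitions allowed by the statement; the transition function and breadth-first search are implementation choices for the sketch, not the formalism of the paper.

"""Minimal sketch of the Prefetch example as a reachability check."""
from collections import deque

ALPHABET = {"Run_Program_X", "Other_Action"}

def system_delta(state, symbol):
    """System model: running Program X creates the prefetch entry;
    any other action leaves the state unchanged."""
    return "PrefetchX" if symbol == "Run_Program_X" else state

def reachable(start, accept, allowed_symbols):
    """Breadth-first search over the automaton restricted to allowed_symbols."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state == accept:
            return True
        for symbol in allowed_symbols:
            nxt = system_delta(state, symbol)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Observed final state: the prefetch entry exists.
final_state = "PrefetchX"

# Unrestricted system model: the observed final state is reachable.
print(reachable("!PrefetchX", final_state, ALPHABET))                      # True

# Intersected with the statement (every transition except Run_Program_X):
# the final state is unreachable, so the statement conflicts with the evidence.
print(reachable("!PrefetchX", final_state, ALPHABET - {"Run_Program_X"}))  # False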

This very simple example can be applied to more complex situations and models; however, a challenge with using a computational approach to model real-world systems is the very large state space that must be modeled, even for relatively simple systems.

For a more in-depth explanation, please see Analysis of Evidence Using Formal Event Reconstruction.

[1] James, J., P. Gladyshev, M.T. Abdullah, Y. Zhu. (2010) “Analysis of Evidence Using Formal Event Reconstruction.” Digital Forensics and Cyber Crime 31: 85-98. [PDF][arXiv:1302.2308]