14 pages, grade: 1.3
II. THE CRIME FORECASTING PROCESS
A. Data generation
B. Predictive analytics
III. THE APPLICATION OF PREDICTIVE POLICING
IV. CRITIQUE AND DISCUSSION
V. CONCLUSION AND PROSPECT
The development of new information systems and data mining techniques has made it possible to predict the place, time, victim or perpetrator of a future crime by analyzing past crime reports.
Provided that enough relevant data has been collected beforehand, computational algorithms can be used to find patterns and forecast crimes. The underlying theories draw on criminological findings such as the increased threat to areas already targeted once, or to areas close to a victimized neighborhood.
The use of computers allows for quicker and more effective analysis as well as the discovery of patterns that would otherwise not be humanly detectable. To be effective, forecasts need to be followed by concrete measures. They can be used to plan police operations and to deploy forces and resources in real time.
This paper describes the most important steps of the crime forecasting process.
Index Terms—Crime forecasting, predictive policing, near-repeat theory
I. INTRODUCTION
A man is arrested for a crime he has not even committed yet. This scene straight out of Philip K. Dick's famous story "Minority Report" could soon become reality with the introduction of computational crime forecasting into everyday policing. Of course no one is going to be prematurely arrested, but police departments and computer scientists are working on ways to forecast felonies and stop them from happening in the first place.
The field of predictive policing applies analytic techniques to identify areas or individuals at risk, in order to deploy forces and resources in a targeted way and thereby reduce crime risk or even solve past crimes. These analytic techniques do not have to be computational, but the use of computers and algorithms considerably eases data collection, storage and representation. Additionally, computational methods may reveal patterns that would otherwise not be obvious to a human analyst.
A single felony will always be essentially unpredictable, as it does not take place regularly or at a predefined time and place. However, it is possible to find patterns of crimes related to a specific neighborhood, season or other characteristics by analyzing past crime reports. Police data shows that criminals respond to opportunity and habit. On the one hand, they are attracted to badly protected neighborhoods as well as disorderly areas with already high crime levels, both signaling a good opportunity to commit crimes without being caught. On the other hand, offenders often return to their crime scene or even repeatedly commit crimes within the same area.
The goal of predictive policing is to exploit these patterns in order to effectively allocate or adjust patrols and other resources, and thereby to "transform policing from a reactive process to a proactive process". Police officers should not be forced merely to react to 911 calls, but instead communicate with citizens and create an ordered environment that stops crimes from happening in the first place.
The rest of the paper is structured as follows: In section II, the analytic steps of the crime forecasting process are presented. First, the generation of data, comprising the collection and representation of relevant information, is described in section II-A. Following this, several predictive methods are introduced in section II-B. Section III discusses the application of predictive policing and the interventions and operations that ideally follow crime forecasting. Section IV then describes potential pitfalls and points of criticism, such as over-reliance on predictive methods or a lack of transparency. In section V, the paper concludes with a summary of the most important points and an outlook on further developments.
illustration not visible in this excerpt
Fig. 1. The four-step predictive process according to Perry et al. Data is collected and analyzed, then the resulting predictions are used to plan police operations (e.g. increasing resources, adjusting patrols) accordingly. The criminal response to these interventions creates new data that is fed back into the database.
II. THE CRIME FORECASTING PROCESS
Analyzing data and discovering patterns is not all there is to the crime forecasting process. For the predictions to be useful, the police department in question has to act on them. The process can be described as a four-step cycle, as shown in figure 1.
The first two steps are concerned with classical data treatment: reams of data on past events, covering the type of crime, the exact time and place and possibly other useful identifiers, are fed into a database and analyzed for patterns. The results of these analyses can then be used to conduct targeted interventions in areas at risk or simply to adjust existing patrols.
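The four-step cycle can be sketched as a simple control loop. The step functions and the toy cell data below are placeholder assumptions, not part of any cited system; real deployments plug in the methods of section II:

```python
# Four-step cycle as a control loop: collect -> analyze -> operate -> respond.

def analyze(database):
    # stand-in pattern detection: every reported cell counts as "at risk"
    return {"hot_cells": [report["cell"] for report in database]}

def plan_operations(predictions):
    # stand-in intervention planning: one patrol per flagged cell
    return [("patrol", cell) for cell in predictions["hot_cells"]]

def criminal_response(operations):
    # interventions change offender behavior, which yields new crime reports
    return [{"cell": "B2"}]

database = [{"cell": "A1"}]       # initial crime reports
for _ in range(2):                # two turns of the cycle
    predictions = analyze(database)
    operations = plan_operations(predictions)
    database.extend(criminal_response(operations))  # new data fed back in

print(len(database))  # the database grows with every turn of the cycle
```

The point of the loop structure is the feedback edge: the data generated by the criminal response to interventions re-enters the collection step, so forecasts are continuously re-based on current behavior.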
A. Data generation
The quality of predictions depends strongly on the quality of the data used to make them. Crime data and maps have to be up to date, containing the very latest events, in order to be useful for analysis and forecasting. The collected data is then preprocessed and represented in a model designed to facilitate pattern recognition at the desired resolution.
1) Collecting information:
Most of the data used by computational forecasting systems is also collected in standard police procedures. Police agencies have long been mapping crimes using geographic information systems and adding supplementary information in order to better discover and understand criminal patterns. The data necessary for predictive analytics is therefore already available, and databases can be fed with many years of past events.
The information collected for each event usually contains the type of crime, the location and the time of the incident, sometimes also additional details such as attributes of the targeted property or individual. Using this data, seasonal or geographical frequency counts of crimes can be computed and used to identify hot spots (see II-B1) and potentially forecast crimes.
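Such frequency counts are straightforward to compute once incidents are stored as records. The incident tuples (type, month, grid cell) below are invented purely for illustration:

```python
from collections import Counter

# Hypothetical incident records: (crime type, month, grid cell).
incidents = [
    ("burglary", 1, "A3"), ("burglary", 1, "A3"), ("theft", 2, "B1"),
    ("burglary", 2, "A3"), ("assault", 1, "C2"), ("theft", 2, "B1"),
]

# Seasonal frequency: how often each crime type occurs in each month.
by_type_month = Counter((ctype, month) for ctype, month, _ in incidents)

# Geographical frequency: counts per grid cell, the raw material for
# hot-spot identification (section II-B1).
by_cell = Counter(cell for _, _, cell in incidents)

print(by_cell.most_common(1))  # the most burdened cell and its count
```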
2) Data representation:
Before any analysis of the collected information can begin, the data needs to be preprocessed and properly represented. Yu et al. divided a geographical area into square grid cells, of one-half mile in one configuration and one-quarter mile in another, and populated these cells with monthly data on occurring crimes. The monthly data is a matrix over six categories of felonies. The approach of Mu et al., using a fourth-order tensor encoding longitude, latitude, time and other relevant events, amounts to a similar outcome, since they also divide the city into grid cells for more feasible localization.
Choosing the right grid resolution for the respective neighborhood is crucial for the quality of predictions and the success of the overall process. On the one hand, the individual cells have to be small enough to allow the effective deployment of forces and implementation of operations (see also figure 2). On the other hand, adding too much information to a fine-grained grid leads to a model that does not yield usable patterns but says nothing definitive at all. Yu et al. compared two grid resolutions and found that the lower resolution led to better predictive results.
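A minimal sketch of such a grid representation, assuming planar coordinates measured in miles from a fixed origin; the cell size, coordinates and categories are illustrative, not taken from the cited studies:

```python
from collections import defaultdict

CELL_SIZE = 0.5  # cell edge length in miles (one of the two resolutions mentioned)

def cell_of(x_miles, y_miles):
    """Map a location (miles from a fixed origin) to a grid cell index."""
    return (int(x_miles // CELL_SIZE), int(y_miles // CELL_SIZE))

# grid[(cell, month)][category] -> crime count, i.e. a sparse version of
# the per-cell monthly matrix of crime categories.
grid = defaultdict(lambda: defaultdict(int))

events = [  # (x, y, month, category), invented for illustration
    (0.2, 0.3, 1, "burglary"),
    (0.4, 0.1, 1, "burglary"),
    (1.7, 0.9, 1, "theft"),
]
for x, y, month, category in events:
    grid[(cell_of(x, y), month)][category] += 1

print(grid[((0, 0), 1)]["burglary"])  # two burglaries binned into cell (0, 0)
```

Changing `CELL_SIZE` is exactly the resolution trade-off discussed above: smaller cells localize operations better but spread the same events over more cells, thinning out each cell's counts.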
B. Predictive analytics
Making predictions based on existing data is a method instinctively used by police officers. However, studies have shown that model-based algorithms are far more accurate in forecasting crimes than traditional police practices.
The reason predictive methods can work is that crimes such as residential burglaries or car thefts are usually influenced by a number of different factors. These factors can be characteristics of the area itself, but also of the surrounding landscape. Generally speaking, areas that have already fallen victim to crime, as well as their adjoining neighborhoods, are especially at risk of being targeted (again). By identifying hot spots of crime occurrence and areas at risk based on their characteristics, and then looking at their surrounding neighbors, many reliable predictions can be made.
1) Hot spot identification:
Hot spot identification is a basic functionality of most crime prediction algorithms. A predefined area is scanned in order to identify clusters with high levels of crime occurrence, the so-called hot spots. The simple underlying assumption is that "the hot spots of yesterday are the hot spots of tomorrow". According to Gorr et al., hot spots usually combine several crime indicators: "(1) motivated offenders, (2) suitable targets, and (3) the absence of a capable guardian". By identifying these hot spots and then specifically deploying patrols there, the third of these conditions, and subsequently the other two, can be eliminated.
The emergence of more developed mapping and visualization technologies has made it easier to track the evolution and displacement of these hot spots and to monitor areas of concern outside traditional policing boundaries. However, in developing algorithms for hot spot prediction, one must consider the prevalence of cold spots in the overall observed area. With cold and hot spots weighted equally, the model would be better trained to recognize areas without crimes than the desired areas at risk. That is to say, the weight of the hot spots needs to be increased, even though overall accuracy drops, in order to train an algorithm that focuses on hot spot prediction.
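The imbalance problem can be seen in a toy calculation: on a grid where only a few cells are hot, a degenerate model that always predicts "cold" scores a high plain accuracy but collapses once hot cells are upweighted. The labels, the 5% hot-spot share and the weights below are invented for illustration:

```python
# 5 hot cells among 100; a model that predicts "cold" everywhere.
labels = ["hot"] * 5 + ["cold"] * 95
predictions = ["cold"] * 100

def weighted_accuracy(labels, preds, hot_weight):
    """Accuracy where each hot-labeled cell counts hot_weight times."""
    weights = [hot_weight if y == "hot" else 1.0 for y in labels]
    correct = sum(w for y, p, w in zip(labels, preds, weights) if y == p)
    return correct / sum(weights)

print(weighted_accuracy(labels, predictions, hot_weight=1.0))   # looks good: 0.95
print(weighted_accuracy(labels, predictions, hot_weight=19.0))  # no better than chance: 0.5
```

With `hot_weight=19.0` the five hot cells carry as much total weight as the 95 cold cells, so the "always cold" shortcut no longer pays off during training.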
2) Risk-terrain modeling:
The aforementioned hot spot identification method is largely driven by previous crimes piling up and forming a hot spot. Risk-terrain modeling centers instead on the interaction of social, physical, geographical or behavioral factors occurring at a specific place. Examples of such indicators are the residential locations of individuals already arrested for committing felonies in the past, the proximity of quick escape routes, or the demographic concentration of young males. The presence of bus stops, public housing, bars, liquor stores, fast food restaurants and even schools has also been found to correlate with violent crimes.
As with the hot spot approach, risk-terrain modeling can be used to adjust police operations and patrols in order to protect potentially endangered areas.
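Conceptually, risk-terrain modeling overlays one map layer per risk factor and scores each cell by a weighted sum of the factors present there. The following sketch makes that concrete; the factor names, binary layers and weights are illustrative assumptions, not calibrated values:

```python
# One binary layer per risk factor: does the factor apply to a given cell?
risk_layers = {
    "bar_nearby":         {"A1": 1, "A2": 0, "B1": 1},
    "bus_stop":           {"A1": 1, "A2": 1, "B1": 0},
    "past_offender_home": {"A1": 0, "A2": 0, "B1": 1},
}
# Illustrative factor weights (a fitted model would estimate these).
weights = {"bar_nearby": 2.0, "bus_stop": 1.0, "past_offender_home": 3.0}

def risk_score(cell):
    """Weighted overlay of all risk layers for one cell."""
    return sum(weights[f] * layer.get(cell, 0) for f, layer in risk_layers.items())

scores = {cell: risk_score(cell) for cell in ("A1", "A2", "B1")}
print(max(scores, key=scores.get))  # the highest-risk cell
```

Note that a cell can score highly without a single recorded crime, which is exactly how this approach differs from pure hot-spot counting.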
3) The near-repeat theory:
The near-repeat theory postulates that once a crime has happened at a specific place, this particular location and its surrounding environment are more likely to be subject to additional, subsequent crimes. The pattern of crime infecting adjacent areas and thereby spreading through a neighborhood like a contagious disease can be compared to the occurrence of earthquakes. Initial earthquakes are well known to trigger aftershocks in the surrounding area; this model of "self-exciting point processes" was adapted by Mohler et al. for the purpose of crime modeling.
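The core of such a model is a conditional intensity that combines a constant background rate with an exponentially decaying "aftershock" contribution from every past event. The sketch below shows this in time only and with made-up parameter values; a fitted model would estimate the parameters, and the kernels used by Mohler et al. are also spatial:

```python
import math

MU, ALPHA, BETA = 0.1, 0.5, 1.0  # background rate, excitation strength, decay

def intensity(t, past_event_times):
    """Self-exciting intensity: background plus decaying boosts from past events."""
    return MU + sum(ALPHA * math.exp(-BETA * (t - ti))
                    for ti in past_event_times if ti < t)

events = [1.0, 2.0]  # times of past crimes, e.g. in days

# Risk is elevated right after a cluster of crimes...
print(intensity(2.5, events))
# ...and decays back toward the background rate as time passes.
print(intensity(9.0, events))
```

The earthquake analogy lives in the sum: each past crime raises the expected rate of further crimes nearby in time, and the boost fades as `t - ti` grows.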
According to Ferguson, data supports the near-repeat phenomenon for property-based crimes such as residential burglary; two theories have been put forward to explain this effect. Flag theory follows a hypothesis similar to the assumptions behind risk-terrain modeling (see II-B2) and the broken windows theory (see II-B4): crimes repeatedly happen at similar places because criminals respond to the same signals, such as disorder, target vulnerability or target attractiveness. According to boost theory, by contrast, the act of committing a crime lets the offender learn information that increases the area's vulnerability, as when burglars break into a house and thereby become familiar with the weaknesses of neighboring houses as well.
In their exploration of data mining techniques for crime forecasting, Yu et al. apply the near-repeat theory combined with the use of temporal data. Employing the so-called "t-Month Approach", they describe crimes happening in one month by counts of crimes in the preceding months. Moreover, they try to gain spatial knowledge by employing different classifiers. The baseline is the One Nearest Neighbor algorithm, which simply assumes that "similar circumstances must result in similar outcomes". Spatially constrained, this algorithm establishes that an area targeted once is more likely to fall victim to a crime again. The authors further use classifiers such as a decision tree, neural networks and naive Bayes, with Bayes yielding the best prediction results at an accuracy between 70 and 80%.
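A minimal sketch of a t-Month-style nearest-neighbor baseline: a cell-month is described by the crime counts of the t preceding months, and the label of the most similar historical vector is copied as the prediction. The feature vectors, the hot/cold labels and the squared-distance choice below are assumptions for illustration, not Yu et al.'s exact setup:

```python
# Historical examples: (counts in the 3 preceding months, next month hot?)
history = [
    ([5, 6, 7], "hot"),
    ([0, 1, 0], "cold"),
    ([4, 4, 5], "hot"),
    ([1, 0, 0], "cold"),
]

def predict_1nn(features):
    """One-nearest-neighbor: copy the label of the closest historical vector."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(history, key=lambda example: sq_dist(example[0], features))
    return label

print(predict_1nn([5, 5, 6]))  # resembles the busy cell-months -> "hot"
print(predict_1nn([0, 0, 1]))  # resembles the quiet cell-months -> "cold"
```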
4) The broken windows theory:
The near-repeat theory treats nearby occurrences of crime as an indicator of future crimes. The broken windows theory follows a similar direction, but starts even earlier: it claims that even small signs of disorder disturb a community and invite criminals.
That is to say, disorder and crime are closely correlated. The eponymous "broken windows" have to be repaired immediately, or further destruction and subsequently the decline of the whole neighborhood will follow. Applying this effect to forecasting models means integrating not only actual crimes, but also potential indicators of disorder, such as physical deterioration, graffiti or public drinking.
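In a forecasting model, this amounts to extending each cell's feature vector beyond recorded crimes. The field names below are hypothetical, chosen only to illustrate the idea:

```python
def cell_features(cell):
    """Feature vector combining recorded crimes with disorder indicators."""
    return [
        cell["burglaries_last_month"],
        cell["thefts_last_month"],
        # disorder indicators the broken windows theory adds on top:
        cell["graffiti_reports"],
        cell["broken_window_reports"],
        cell["public_drinking_reports"],
    ]

example = {
    "burglaries_last_month": 2, "thefts_last_month": 1,
    "graffiti_reports": 4, "broken_window_reports": 3,
    "public_drinking_reports": 0,
}
print(cell_features(example))  # one training vector for this cell
```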
III. THE APPLICATION OF PREDICTIVE POLICING
In 2003, Gorr and Harries remarked that, until then, crime forecasting had simply not been feasible, or at least not worth the effort. Since then, the computerized collection of data has become standard in police departments, as have monthly meetings to discuss strategic planning. This allows the application of predictive algorithms to analyze data, forecast crimes and use this information to strategically position patrols. The predictive process is not supposed to change the overall policing methods, but to make them more effective.
Predictive policing can be used to forecast especially endangered areas, individuals or times, as well as people at risk of becoming offenders. The practice of predictive methods, especially in the United States, the major field for predictive policing, has focused on property crimes such as residential burglary, which seems to be a particularly good case for application. As noted in II-B3, data supports the near-repeat theory, one of the most frequently applied theories in crime forecasting, for residential burglary. Predictions based on individuals are delicate and may also be more prone to biases; these issues are discussed further in section IV. Traditional police tactics have often focused on individuals and their crimes; forecasting techniques are of little use here, since they can only make predictions based on patterns, not on highly subjective decisions.
As shown in figure 1, one of the four steps in the predictive process is the execution of police operations. According to Perry et al., the inclusion of personal interventions based on analytic findings is crucial to the overall effectiveness and success of predictive policing. They define successful operations as having "top-level support, sufficient resources, automated systems to provide needed information, and assigned personnel with both the freedom to resolve crime problems and accountability for doing so". These comprehensive requirements show the necessity of personal action based on predictions in order to actually decrease crime.
With the availability of accurate short-term forecasts, police officers are able to allocate patrols in real time, protecting neighborhoods especially at risk of being targeted, to shift resources between crime prevention and enforcement operations, and to schedule vacations or training for trough crime months. The identification of hot spots small enough to allow targeted operations facilitates strategic planning (see figure 2).
illustration not visible in this excerpt
Fig. 2. Predictive policing with hot spot identification and risk-terrain modeling can be of great use in strategic planning. By identifying not only the overall distribution of crimes but hot spots small enough to be easily monitored, resources can be deployed very effectively, and the mere presence of police officers can prevent crime by creating a deterrence and suppression effect.