Student research paper (Studienarbeit), 2012
173 pages, grade: 5.5
Acknowledgements
1 Introduction
2 Theoretical Background
2.1 Frequency Domain
2.1.1 Discrete Fourier Transform (DFT) and Inverse Discrete Fourier Transform (IDFT)
2.2 Filters in the Frequency Domain
2.2.1 Convolution theorem and transfer function
2.2.2 Amplitude, Phase and Time-Shift Functions
2.2.3 Symmetric Filters
2.2.3.1 Low-Pass Filter
2.2.3.2 High-Pass Filter
2.2.3.3 Band-Pass Filter
2.2.4 Advanced Filters
2.2.4.1 MA(q)-Filter
2.2.4.2 AR(p)-Filter
2.2.4.3 ARMA(p,q)-Filter
2.3 Direct Filter Approach (DFA)
2.3.1 Optimization Criterion: MMSFE
2.3.2 Decomposition of the MMSFE
2.4 Real-time detection of turning points using DFA
2.4.1 Improving Speed
2.4.2 Reconciling Speed and Reliability
2.4.3 Level and Time Delay Constraints
2.5 Performance Measurement
2.5.1 Drawdown and Maximum Drawdown
2.5.2 R-squared
2.5.3 Sharpe Ratio
2.5.4 Return on Investment
3 Experiment Design and Empirical Results
3.1 A Brief Analysis of the S&P 500 Index
3.1.1 Frequential Analysis
3.1.1.1 Log-returns
3.1.1.2 Filter selection based on the Periodogram
3.1.2 Application of the Direct Filter Approach (DFA)
3.1.2.1 In-sample Analysis
3.1.2.1.1 Coefficient estimation
3.1.2.2 Out-of-sample Analysis
3.1.2.2.1 Methods
3.1.3 Turning Points Identification
3.1.3.1 Method Description
3.1.3.2 Analysis
3.1.4 A Brief Analysis of the CBOE Volatility Index (VIX)
3.1.4.1 Coefficient Estimation
3.1.4.2 VIX as strategic extension for the S&P
3.2 A Brief Analysis of the EURO STOXX 50 Index
3.2.1 Analysis
3.2.2 A Brief Analysis of the EURO STOXX 50 Volatility Index (VSTOXX)
3.2.2.1 Analysis
3.2.2.2 VSTOXX as strategic extension for the EURO STOXX
3.3 Strategy Development
3.4 Modeling enhancement
3.4.1 A Brief Analysis of the exchange rate (EURO/US-$)
3.4.1.1 Analysis
3.4.2 Relationship and link between S&P and EURO STOXX
4 Final Results
5 Conclusion
Bibliography
List of Equations
List of Figures
List of Tables
List of Abbreviations
Listings
A Appendices
A.1 Convolution Theorem Proof
A.2 Inverse Fourier Transform Proof
A.3 DFT S&P 500
A.4 In-sample Analysis S&P 500
A.4.1 Amplitude and time delay S&P 500
A.4.2 Time series S&P 500
A.5 Methods out-of-sample: based on the year 2011, S&P 500
A.6 Turning Points out-of-sample S&P 500
A.6.1 Turning Points Identification series
A.6.2 Turning Points progression
A.6.3 Turning Points cash strategy without costs
A.6.4 Turning Points cash strategy with costs
A.7 Strategy Development S&P 500
A.7.1 Strategy Development: minimum holding period (MHP)
A.8 In-sample Analysis
A.8.1 Amplitude and time delay
A.8.2 Time series
A.9 Out-of-sample Analysis
A.9.1 Time series
A.9.2 Progression
A.10 In-sample Analysis EURO STOXX 50
A.10.1 Amplitude and time delay EURO STOXX 50
A.10.2 Time series EURO STOXX 50
A.11 Turning Points out-of-sample EURO STOXX 50
A.11.1 Turning Points Identification series
A.11.2 Turning Points progression
A.11.3 Turning Points cash strategy without costs
A.11.4 Turning Points cash strategy with costs
A.12 Strategy Development EURO STOXX 50
A.12.1 Strategy Development: minimum holding period (MHP)
A.13 In-sample Analysis VSTOXX
A.13.1 Amplitude and time delay VSTOXX
A.13.2 Time series VSTOXX
A.14 Out-of-sample Analysis VSTOXX
A.14.1 Time series
A.14.2 Progression
A.15 In-sample Analysis exchange rate
A.15.1 Amplitude and time delay exchange rate
A.15.2 Time series exchange rate
A.16 Out-of-sample exchange rate
A.16.1 Time series exchange rate
A.16.2 Progression exchange rate
A.17 Final Results
As a consequence of the recent financial crisis, institutions are increasingly interested in identifying turning points in financial time series. The accurate and early identification of these turning points can result in the optimal exploitation of the invested capital and profit maximization. Most existing methods for the real-time identification of turning points have proved unreliable, hence the need to develop a cutting-edge model. The DFA methodology of Prof. Dr. Marc Wildi is one promising real-time procedure that seeks to solve this problem. The purpose of this thesis is the evaluation and comparison of different variants of the DFA procedure in order to find a method for the effective identification of turning points in important financial time series, such as the S&P 500 and the EURO STOXX 50 and their implied volatility indices (VIX and VSTOXX, respectively). Further, this thesis aims to develop a suitable investment strategy based on the obtained results.
For the purpose of this thesis, the time series mentioned above were analyzed between the years 1990 and 2011, using the last year as out-of-sample data. Frequential analysis using Fourier transforms as well as different variants of the DFA-algorithm were applied in order to identify the desired turning points.
The results obtained from these analyses of the S&P 500 and EURO STOXX 50 time series show a considerable out-of-sample investment return, which verifies the validity of the model. On a second level of analysis, using the implied volatility indices it was possible to generalize the model and thereby verify the initial results. Moreover, the development of further investment strategies made it possible to normalize profit returns, maintaining a semi-constant growth, which is usually preferred by financial institutions. Finally, given the structural similarities of the two main financial series examined, whose clear profile was only observable using the DFA system, it was possible to combine both time series using the daily exchange rate as a cyclical and structural catalyst, thus extending the model's reach.
All of this highlights the flexibility of the DFA model for the real-time analysis of financial time series and its practical applicability as a tool for investment analysis. The DFA model therefore enables an accurate real-time identification of turning points in financial series.
Keywords: Direct Filter Approach (DFA), Frequential Analysis, Algorithmic Trading, Turning Points, S&P 500, EURO STOXX 50, Exchange Rate, Forecasting Methods, ARIMA, Minimum Holding Period (MHP), Fourier Transform, R-squared, Sharpe Ratio, Profit Maximization
∗ Zurich University of Applied Sciences. Corresponding author: ramirfel@gmail.com. ∗∗ Zurich University of Applied Sciences.
This research project would not have been possible without the support and guidance of numerous individuals. The authors wish to express their gratitude to their supervisor, Prof. Dr. Marc Wildi who was abundantly helpful and offered invaluable assistance, support and guidance. Without his innovative work in the fields of real-time signal extraction and forecasting this thesis would not have been possible. Our deepest gratitude also goes out to Corepoint Capital AG and Simon Otziger for providing not only an interesting subject of study but also the necessary data for the analysis. Without their valuable input and constructive criticism the resulting thesis would not have been successfully completed. Also, we would like to thank the Zurich University of Applied Sciences for supplying the necessary facilities for this project. Last but not least, we would like to express our gratitude to our families and friends for their support and understanding during the many hours spent away from them working on the project.
Reliable signal extraction and turning point identification are crucial for the daily business of financial institutions. Accurate and early detection of trend changes puts such institutions at an advantage over their competitors and could determine their market survival or demise. Different signal extraction methods, such as SARIMA or ARCH/GARCH, come into consideration for turning point detection. However, most existing methods for the real-time identification of turning points have proved unreliable, hence the need to develop a cutting-edge model.
This thesis focuses on the lesser-known Direct Filter Approach (DFA) developed by Prof. Dr. Marc Wildi. Said method is based on the frequential analysis of time series and has been specially designed for real-time purposes. As shown by the results of the NN3 and NN5 forecasting competitions [25], this procedure outperforms even the most popular and state-of-the-art model-based approaches, such as the renowned artificial neural networks method. The main advantage of the Direct Filter Approach lies in its flexibility and practical use, allowing the user to customize the model in order to extract and identify specific characteristics. During the course of the analysis, the DFA will be applied to the Standard & Poor’s 500 Index (S&P 500) with its implied volatility index (VIX), the Euro Zone 50 Index (EURO STOXX 50), also with its implied volatility index (VSTOXX), and the Euro/Dollar exchange rate (EURO/USD).
The primary objective of this thesis is to invest as profitably as possible in the S&P 500 and the EURO STOXX 50 futures series using different configurations of the DFA procedure, with and without transaction costs. Further, this thesis aims to develop a suitable investment strategy based on the results obtained from the turning point identification. The results provide a basis for gaining new insights into further performance improvements.
This paper is structured in four chapters. First, the necessary theoretical background with the most important DFA formulae is introduced in order to understand and implement the proposed procedures. Second, the experiment design and empirical results are presented and discussed with respect to profit optimization. Third, the final optimized results are summarized in order to present and evaluate to what extent the primary objectives were reached. Finally, the fourth chapter covers the conclusions drawn from the obtained results and gives a brief overview of possible improvements.
The following sections briefly introduce the important formulas of the DFA based on [24, cf. Wildi 2008, 11-261]. A detailed theoretical background as well as all the statistical and mathematical definitions may be found in the aforementioned reference.
The frequency domain is the domain for the analysis of mathematical functions or signals with respect to frequency. The traditional analysis of functions is based on the time domain, showing the evolution of a specific signal over time. In the frequency domain, on the other hand, signals are expressed by their frequency components according to their variations over a definite range. A frequency representation usually includes information about the phase shift applied to each frequency in order to recombine the frequency components and recover the original signal. The frequency domain is related to the Fourier series, which allows the decomposition of a periodic signal into a finite or infinite number of frequencies. In the case of non-periodic signals, the frequency domain is directly related to the Fourier transform.
In mathematics, the Discrete Fourier Transform (abbreviated DFT) is a discrete transform used in Fourier analysis. It transforms one mathematical function into another, obtaining a frequency-domain representation from the original time-domain input. Literature such as [14, cf. Lyons 2011, 59-123], [22, cf. Vaseghi 2008, 272-278], [5, cf. Chaparro 2011, 299-349], [12, cf. Kreyszig 2011, 522-533] or [21, cf. Shin and Hammond 2008, 17-169] discusses the importance and meaning of the DFT, which is summarized in the next paragraphs. It is important to mention that the DFT requires as input a discrete sequence of finite duration. Such sequences are usually generated by sampling a continuous function, such as the human voice. Unlike the discrete-time Fourier transform (DTFT), this transformation only evaluates enough frequency components to reconstruct the finite segment being analysed. Using the DFT implies that the segment being analysed is a single period of a periodic signal that extends to infinity (if this is not met, a window should be used to reduce the spurious spectrum). For the same reason, the inverse DFT (IDFT) cannot reconstruct the full time domain unless the input is perfectly periodic. For these reasons, it is said that the DFT is a Fourier transform for the analysis of discrete-time signals on a finite domain. The sinusoidal basis functions that arise from the decomposition have the same properties.
The input of the DFT is a finite sequence of real or complex numbers, so it is ideal for processing information stored in digital media. In particular, the DFT is commonly used in digital signal processing and related fields dedicated to analysing the frequencies contained in a sampled signal, for solving partial differential equations, and for performing operations such as convolution or multiplication of large integers. A very important factor for this type of application is that the DFT can be computed efficiently in practice using the Fast Fourier Transform (FFT) algorithm.
As explained by [24, Wildi 2008, 45]:
1. Definition: The Discrete Fourier Transform of X_1, ..., X_N is defined by:

Ξ_N(ω_k) = (1/√(2πN)) Σ_{t=1}^{N} X_t e^{−itω_k}

where N is the sample length; ω_k = 2πk/N, the base components; and X_t, the input signal.
2. Definition: Let X_t be a finite sequence of length N and let Ξ_N(ω_k) be the Discrete Fourier Transform of X_t. Then the Inverse Discrete Fourier Transform is defined by:

X_t = √(2π/N) Σ_{k=0}^{N−1} w_k Ξ_N(ω_k) e^{itω_k}

where w_k are the corresponding weights. Proof. See Appendix A.2.
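As a quick numerical sanity check, the transform pair above can be sketched in a few lines. This is our own illustration (the function names `dft`/`idft` are not from the thesis), assuming uniform weights w_k = 1 and the 1/√(2πN) normalization:

```python
import numpy as np

# Sketch of the DFT/IDFT pair with the 1/sqrt(2*pi*N) normalization used
# above; uniform weights w_k = 1 are assumed. Names dft/idft are illustrative.
def dft(x):
    """Discrete Fourier Transform of a finite sequence X_1, ..., X_N."""
    N = len(x)
    t = np.arange(N)
    E = np.exp(-2j * np.pi * np.outer(t, t) / N)  # e^{-i t omega_k}, omega_k = 2*pi*k/N
    return E @ x / np.sqrt(2 * np.pi * N)

def idft(X):
    """Inverse DFT: recovers the original sequence up to rounding error."""
    N = len(X)
    t = np.arange(N)
    E = np.exp(2j * np.pi * np.outer(t, t) / N)
    return np.sqrt(2 * np.pi / N) * (E @ X)

x = np.sin(2 * np.pi * np.arange(32) / 8)   # simple periodic test signal
assert np.allclose(idft(dft(x)).real, x)    # perfect reconstruction
```

The forward and inverse normalizations multiply to 1/N, which is what makes the reconstruction exact.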
Time series are often defined and implemented in the time domain. In order to explain the effect of a filter on a time series, it is necessary to define and transform it in the frequency domain. This chapter focuses on the most important mathematical definitions in the frequency domain, namely the transfer function, the amplitude function, and the phase and time-shift functions. Some simple and advanced filters are also presented and defined.
Filters are used to separate different components of a time series. The convolution theorem, one of the most important definitions of signal processing in the frequency domain, explains the effects of the transformation from the time domain to the frequency domain as well as the consequences of applying a filter.
According to [10, Goswami 2011, 39], the convolution of two functions f_1(t) and f_2(t) is defined by:

(f_1 ∗ f_2)(t) = ∫_{−∞}^{∞} f_1(τ) f_2(t − τ) dτ
Proof. See Appendix A.1
If y(t) is a linear combination of x(t),

y(t) = Σ_{k=−∞}^{∞} γ_k x(t − k),

then by the convolution theorem the Fourier transform of the output is the product of the Fourier transform of the input and that of the weights γ_k.
A sequence γ_k of square-summable numbers is called a filter. The following general transfer function then defines said filter [15, Manolakis et al. 2011, 17]:

Γ(ω) = Σ_{k=−∞}^{∞} γ_k e^{−ikω}
The transfer function is the Fourier transform of a filter and its corresponding weights, thus describing a filter in full effect. The concepts filter and transfer function will be used interchangeably, as suggested by [24, Wildi 2008, 30]. A filter Γ(ω) is called real if γ_k ∈ R for all k, and it is called symmetric if γ_k = γ_{−k} for all k. If Γ(ω) is symmetric and real, then Γ(ω) ∈ R for all ω.
The amplitude, phase, and time-shift functions of a filter, derived from the transfer function, are defined by:

A(ω) = |Γ(ω)|,  Γ(ω) = A(ω) e^{−iΦ(ω)},  φ(ω) = Φ(ω)/ω

where A(ω) is the amplitude function, Φ(ω) the phase function, and φ(ω) the time-shift function.
The amplitude may be interpreted as the weight applied to a signal. The amplitude function amplifies the signal if A(ω) > 1 and dampens it if A(ω) < 1. If A(ω) = 1, the signal remains unaltered, and if A(ω) = 0 it is completely erased. Section 2.2 in [24, Wildi 2008] illustrates with the help of examples how the time-shift function translates the signal in the time domain.
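To make these interpretations concrete, the following sketch (our own code, not from the thesis) computes the amplitude and time shift of an equal-weight MA(3) filter; its weights are centered at lag 1, so the filter delays the signal by one period:

```python
import numpy as np

# Amplitude and time-shift functions of the causal filter
# y_t = (x_t + x_{t-1} + x_{t-2}) / 3. Illustrative sketch.
def transfer(gamma, omega):
    """Gamma(omega) = sum_k gamma_k e^{-i k omega} for causal weights gamma."""
    k = np.arange(len(gamma))
    return np.sum(gamma[:, None] * np.exp(-1j * np.outer(k, omega)), axis=0)

gamma = np.array([1 / 3, 1 / 3, 1 / 3])
omega = np.linspace(0.01, np.pi / 2, 500)   # avoid division by omega = 0
G = transfer(gamma, omega)
A = np.abs(G)                # amplitude function A(omega)
Phi = -np.angle(G)           # phase, with the convention Gamma = A e^{-i Phi}
shift = Phi / omega          # time-shift function phi(omega)

assert np.isclose(A[0], 1.0, atol=1e-3)      # level preserved near omega = 0
assert np.allclose(shift, 1.0, atol=1e-6)    # constant delay of one period
```

The amplitude dips below 1 as ω grows (the moving average dampens faster oscillations), while the time shift stays at one period, the center of mass of the weights.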
2.2.3.1. Low-Pass Filter
The low-pass filter excludes high frequencies while maintaining the lower ones. By defining a specific frequency for the pass- or stopband, for example f_1 = π/6, it is possible to filter the desired frequencies while excluding the undesired ones. In this example, all frequencies above this value are filtered out. The general transfer function of this filter would then be:

Γ(ω) = 1 for |ω| ≤ π/6, and Γ(ω) = 0 otherwise
The high-pass filter is the complete opposite of the low-pass filter. It excludes lower frequencies, as defined by the stopband, while maintaining higher ones. Its general transfer function is defined by:

Γ(ω) = 0 for |ω| < f_1, and Γ(ω) = 1 otherwise
The band-pass filter combines both low- and high-pass filters. By defining two specific frequencies, it is possible to extract a desired range to be analyzed. Frequencies outside this range are blocked or substantially weakened. The general transfer function of this filter, related to the two predefined frequencies f_1 < f_2, is defined by:

Γ(ω) = 1 for f_1 ≤ |ω| ≤ f_2, and Γ(ω) = 0 otherwise
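The effect of such an ideal filter is easy to demonstrate numerically. The following sketch (our own code, with an illustrative cutoff of π/6) applies a symmetric low-pass filter by zeroing the stopband frequencies of the DFT; the high-frequency component is erased while the low-frequency one passes unchanged:

```python
import numpy as np

# Ideal low-pass filter applied in the frequency domain: the transfer
# function is 1 in the passband |omega| <= cutoff and 0 in the stopband.
def ideal_lowpass(x, cutoff):
    X = np.fft.fft(x)
    omega = 2 * np.pi * np.fft.fftfreq(len(x))  # frequencies in [-pi, pi)
    X[np.abs(omega) > cutoff] = 0.0             # erase the stopband
    return np.fft.ifft(X).real

t = np.arange(240)
slow = np.sin(2 * np.pi * t / 120)   # omega ~ 0.05, inside the passband
fast = np.sin(2 * np.pi * t / 6)     # omega ~ 1.05, inside the stopband
y = ideal_lowpass(slow + fast, cutoff=np.pi / 6)

assert np.allclose(y, slow, atol=1e-8)   # only the slow component survives
```

A high-pass or band-pass version follows by changing the mask on `omega` accordingly.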
It is possible to define more complex and detailed filters depending on the desired characteristics and application. The essential mathematical definitions of the basic advanced filters are presented in the following sections [24, cf. Wildi 2008, 29-40].
A general MA(q)-filter is defined by:

y_t = Σ_{k=0}^{q} b_k x_{t−k}
Therefore, its respective transfer function has the following form:

Γ(ω) = Σ_{k=0}^{q} b_k e^{−ikω}
A general AR(p)-filter is defined by:

y_t = Σ_{k=1}^{p} a_k y_{t−k} + b_0 x_t
Therefore, its respective transfer function has the following form:

Γ(ω) = b_0 / (1 − Σ_{k=1}^{p} a_k e^{−ikω})
A general ARMA(p,q)-filter is defined by:

y_t = Σ_{k=1}^{p} a_k y_{t−k} + Σ_{k=0}^{q} b_k x_{t−k}

Therefore, its respective transfer function has the following form:

Γ(ω) = (Σ_{k=0}^{q} b_k e^{−ikω}) / (1 − Σ_{k=1}^{p} a_k e^{−ikω})
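As a brief numerical illustration (our own sketch, with arbitrary coefficient values), the transfer function of an ARMA(1,1) filter can be evaluated on a frequency grid:

```python
import numpy as np

# Transfer function of the ARMA(1,1) filter
#   y_t = a1 * y_{t-1} + b0 * x_t + b1 * x_{t-1},
# evaluated as Gamma(omega) = (b0 + b1 * z) / (1 - a1 * z), z = e^{-i omega}.
a1, b0, b1 = 0.5, 1.0, 0.5   # illustrative coefficients
omega = np.linspace(0.0, np.pi, 200)
z = np.exp(-1j * omega)
Gamma = (b0 + b1 * z) / (1 - a1 * z)

assert np.isclose(np.abs(Gamma[0]), 3.0)      # gain at omega = 0: (b0+b1)/(1-a1)
assert np.abs(Gamma[-1]) < np.abs(Gamma[0])   # high frequencies are dampened
```

For this parameter choice the filter behaves like a smoother: full amplification at frequency zero, damping toward the Nyquist frequency.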
2.3. Direct Filter Approach (DFA)
The Direct Filter Approach (abbreviated DFA) is a methodological variation of the traditional forecasting methods. These methods tend to emphasize one-step-ahead forecasting errors, while the DFA emphasizes the intrinsic filter errors. For that purpose, Wildi suggests an estimation algorithm based on this new optimization criterion. The following sections summarize the most important features of the estimation procedure applied to the early detection of turning points, based on the work of [24, Wildi 2008].
As in any methodological procedure, the DFA focuses on the minimization of errors based on an optimization criterion. The efficiency of this estimate and its most important features are discussed in this chapter. An extended discussion and demonstration of the optimization criterion can be found in [24, Wildi 2008, 41-47].
An efficient estimate of the minimal mean-square filter error (MMSFE) can be expressed as the following minimization objective:

(2π/T) Σ_{k=−T/2}^{T/2} |Γ(ω_k) − Γ̂(ω_k)|² I_TX(ω_k) → min  (2.18)

where Γ̂(ω_k) is the estimated transfer function of the real-time filter and Γ(ω_k) the theoretical objective transfer function of the symmetric filter.
To simplify the evaluation of the minimization problem, we assume, following the advice given in [24, Wildi 2008, 42], that T is even and that the frequencies ω_k are defined by:

ω_k = 2πk/T,  k = −T/2, ..., T/2
Furthermore, I_TX, the periodogram of the input signal X_t, is expressed as:

I_TX(ω_k) = (1/(2πT)) |Σ_{t=1}^{T} X_t e^{−itω_k}|²
The periodogram plays an essential role in the analysis of time series. It quantifies the significance of different frequencies in time-series data in order to identify intrinsic periodic signals by transforming the data from the time domain to the frequency domain using the Fourier transform. It is easily verifiable that the periodogram is nothing more than the squared absolute value of the Fourier transform.
The periodogram will be used continuously in the following sections as a tool to analyse frequential singularities, specifically to uncover seasonal and reiterative events. Moreover, in the case of the DFA error minimization problem, it is also used as a weighting function inside the optimization algorithm.
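A minimal sketch (our own code) of the periodogram as the squared absolute value of the normalized DFT; the dominant peak of a noisy seasonal series sits at the seasonal frequency:

```python
import numpy as np

# Periodogram I(omega_k) = |DFT(x)(omega_k)|^2, using the 1/sqrt(2*pi*N)
# normalization from the DFT definition above. Illustrative sketch.
def periodogram(x):
    N = len(x)
    X = np.fft.fft(x) / np.sqrt(2 * np.pi * N)
    omega = 2 * np.pi * np.arange(N) / N
    return omega[: N // 2 + 1], np.abs(X[: N // 2 + 1]) ** 2

rng = np.random.default_rng(0)
t = np.arange(256)
x = np.sin(2 * np.pi * t / 32) + 0.1 * rng.standard_normal(256)  # period 32 + noise
omega, I = periodogram(x)

assert np.isclose(omega[np.argmax(I)], 2 * np.pi / 32)  # peak at the seasonal frequency
```

Exactly this kind of peak-hunting is what "filter selection based on the periodogram" amounts to in practice.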
Lastly, it is important to briefly evaluate the effectiveness of this error minimization procedure in practice, where a finite number of observations is the norm. As stated by [24, Wildi 2008, 44]:
“The name Direct Filter Approach is self-explanatory since filter coefficients are determined directly by minimizing an efficient estimate of the expected squared filter error. The interpretation of 2.18 is straightforward: Γ̂(ω_k) should be ‘close’ to Γ(ω_k) for frequency components dominating the spectrum of the process X_t. Now this ‘local’ fit is immunized against random errors of the periodogram by imposing regularity assumptions upon parameters of the real-time filter Γ̂(.) [...] Therefore, the DFA concurrent filter is the result of a carefully designed weighted optimization problem stated in the frequency domain.”
Then again, the validity of the structural model-based assumptions can be verified both theoretically and empirically (as shown by the publications of Prof. Dr. Marc Wildi [25, cf. Wildi 2012]).
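The idea behind the criterion can be sketched numerically. The following is our own minimal illustration, not Wildi's implementation: because the transfer function of a causal MA filter is linear in its coefficients, minimizing the periodogram-weighted squared distance to a target transfer function reduces to a linear least-squares problem (here with a flat periodogram, i.e. a white-noise input, and an ideal low-pass target):

```python
import numpy as np

# Minimal DFA-style fit (illustrative): choose causal weights b_0..b_{L-1}
# minimizing sum_k |Gamma(omega_k) - Gamma_hat(omega_k)|^2 * I(omega_k).
def dfa_fit(target, I, omega, L):
    E = np.exp(-1j * np.outer(omega, np.arange(L)))   # Gamma_hat = E @ b
    w = np.sqrt(I)[:, None]
    A = np.vstack([(w * E).real, (w * E).imag])       # stack real/imag parts
    y = np.concatenate([(w[:, 0] * target).real, (w[:, 0] * target).imag])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b

omega = np.linspace(0, np.pi, 100)
target = (omega <= np.pi / 6).astype(float)   # ideal low-pass, cutoff pi/6
I = np.ones_like(omega)                       # flat periodogram (white noise)
b = dfa_fit(target, I, omega, L=20)
Gamma_hat = np.exp(-1j * np.outer(omega, np.arange(20))) @ b

err = np.mean(np.abs(Gamma_hat - target) ** 2)
assert err < 0.1   # the causal filter tracks the target in weighted mean square
```

With a non-flat periodogram the fit concentrates on the frequencies that dominate the spectrum of the input, which is exactly the "local" fit described in the quotation above.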
The squared filter approximation can be decomposed into amplitude and phase errors [24, cf. Wildi 2008, 46-47]. This decomposition will play a fundamental role in the following chapters for the detection of turning points, since it allows the readjustment of both the time delay and the level approximation of output filters.
Let Γ(ω) and Γ̂(ω) be the transfer functions of the objective and estimated filters, respectively. Then the following identity holds for any general transfer function:

|Γ(ω) − Γ̂(ω)|² = (A(ω) − Â(ω))² + 2A(ω)Â(ω)(1 − cos(Φ(ω) − Φ̂(ω)))  (2.21)
Assuming that Γ(ω) is symmetric, then Φ(ω) ≡ 0. Lastly, readjusting equation 2.18 under the assumption of symmetric transfer functions and the decomposition proposed in equation 2.21 yields:

(2π/T) Σ_{k=−T/2}^{T/2} [(A(ω_k) − Â(ω_k))² + 2A(ω_k)Â(ω_k)(1 − cos Φ̂(ω_k))] I_TX(ω_k) → min  (2.22)
The resulting readjustment allows the separation of level and time delay errors. The first summand corresponds to the estimated filter error related to the amplitude function of the real-time filter. Analogously, the second summand corresponds to the error related to the phase function (time shift) of the respective real-time filter.
The proposed decomposition enables the independent correction of filter errors related to time delay and level, thus allowing the user to surgically adjust the filter outputs according to his or her needs.
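The decomposition rests on an exact identity for complex numbers in polar form: |A e^{−iΦ} − Â e^{−iΦ̂}|² = (A − Â)² + 2AÂ(1 − cos(Φ − Φ̂)). A quick numerical check (our own sketch):

```python
import numpy as np

# Verify the amplitude/phase error decomposition on random transfer values.
rng = np.random.default_rng(1)
A = rng.uniform(0.1, 2.0, 50)            # amplitudes of the objective filter
A_hat = rng.uniform(0.1, 2.0, 50)        # amplitudes of the estimated filter
Phi = rng.uniform(-np.pi, np.pi, 50)     # phases of the objective filter
Phi_hat = rng.uniform(-np.pi, np.pi, 50)

Gamma = A * np.exp(-1j * Phi)
Gamma_hat = A_hat * np.exp(-1j * Phi_hat)

lhs = np.abs(Gamma - Gamma_hat) ** 2
rhs = (A - A_hat) ** 2 + 2 * A * A_hat * (1 - np.cos(Phi - Phi_hat))
assert np.allclose(lhs, rhs)   # the decomposition holds identically
```

Setting Φ ≡ 0 for a symmetric objective filter leaves the first summand as the pure level error and the second as the pure time-delay error, as described above.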
The first step towards developing an appropriate model for the identification of turning points involves clarifying a couple of definitions. In mathematics, a turning point is defined as a local maximum or minimum on a curve or function, specifically where the first derivative equals zero. In economics, on the other hand, a turning point describes an event that results in a significant change in the progress of a company, industry, sector, economy or geopolitical situation [20, cf. Sharpe et al. 1995, 517-525]. For the purpose of this thesis, specifically for the development of an appropriate model, the mathematical definition will be used, since it allows the differentiation of both maxima and minima.
Going further with the analysis, since the main focus is centered on identifying the derivative of the trend growth, the next step is developing a corresponding estimate. By generalizing equation 2.18, the desired estimate can be expressed as:

(2π/T) Σ_{k=−T/2}^{T/2} |Γ(ω_k) − Γ̂(ω_k)|² |1 − e^{iω_k}|² I_TX(ω_k) → min  (2.23)
Compared to equation 2.18, this estimate differs only by the additional weighting function |1 − e^{iω_k}|², which strongly emphasizes high-frequency components, resulting in better amplitude characteristics of the filter in the stopband [24, cf. Wildi 2008, 105-106].
In order to improve the speed of the resulting real-time estimates, it is possible to decompose the original definition of the filter error into amplitude and time-shift errors. The most basic definition of the squared error based on the transfer function allows the following equality:
illustration not visible in this excerpt
If λ = 1 it can be written as:
illustration not visible in this excerpt
Now we can set 2.23 into 2.25 and obtain a more generalized criterion:
illustration not visible in this excerpt
Increasing λ emphasizes the time shift (delay in the passband). Different experiments using λ can be found in [24, Wildi 2008, 108-112].
A further generalization of the obtained criterion allows more general weighting schemes W (ω):
illustration not visible in this excerpt
Which is equivalent to:
illustration not visible in this excerpt
Finally, W(ω) is defined as follows:

W(ω_k) = 1 for |ω_k| ≤ cutoff, and W(ω_k) = (1 + |ω_k| − cutoff)^expweight for |ω_k| > cutoff  (2.29)

For expweight zero, the original minimization procedure remains unchanged. If expweight is a positive number, then the stopband is emphasized by the factor (1 + |ω_k| − cutoff)^expweight, resulting in "smoother" results due to noise cancellation.
The definitive generalized optimization criterion depends on the correction terms lambda and expweight: lambda regulates the time delay, while expweight regulates the noise suppression. Nevertheless, it is important to mention that the two correction terms antagonize each other. A smoother curve can only be obtained by sacrificing timeliness and vice versa. Therefore, an optimal combination of lambda and expweight is ultimately dependent on the desired characteristics to be extracted [24, cf. Wildi 2008, 112].
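The weighting scheme just described can be sketched as follows (our own code; `cutoff` and `expweight` mirror the names used in the text):

```python
import numpy as np

# W(omega) = 1 in the passband; in the stopband the weight grows as
# (1 + |omega| - cutoff)^expweight, emphasizing noise suppression there.
def expweight_weights(omega, cutoff, expweight):
    W = np.ones_like(omega)
    stop = np.abs(omega) > cutoff
    W[stop] = (1 + np.abs(omega[stop]) - cutoff) ** expweight
    return W

omega = np.linspace(0, np.pi, 100)
W0 = expweight_weights(omega, np.pi / 6, expweight=0)
W2 = expweight_weights(omega, np.pi / 6, expweight=2)

assert np.allclose(W0, 1.0)   # expweight 0: original criterion unchanged
assert W2[0] == 1.0           # passband left untouched
assert W2[-1] > 1.0           # stopband increasingly emphasized
```

Plugging these weights into the criterion reproduces the smoothness/timeliness trade-off: larger expweight suppresses stopband noise at the cost of timeliness.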
The idea to develop rectification constraints emerges from the need to modulate trend extraction and seasonal adjustment singularities in practice. Specifically, it is important to address the normalization of filter outputs concerning amplitude and reliability in frequency zero.
Starting with the level (or reliability) correction, the following simple constraint can be included in the original DFA minimization problem:

Γ̂(0) = Γ(0)
This particular restriction can be obtained very simply by normalizing the one-sided filter and is quite useful, since the output of the real-time filter then matches the level of the symmetric one, providing a standardized starting point. It is important to mention that from now on this particular condition will be named i1. This convention is based on the practical application of the constraint in the programming language, where it is possible to define a boolean variable (a binary logical variable with either TRUE or FALSE values) inside the optimization algorithm.
Another important constraint to include in the original DFA minimization problem concerns the “timing” of the filter output (namely, the time shift), requiring the time shift of the real-time filter to vanish at frequency zero:

φ̂(0) = 0
This constraint enables the “synchronization” of in- and output signals at frequency zero, allowing a “faster” identification of the output’s desired features.
Both constraints can be quite useful in practice depending on the properties of the input-signal to be emphasized. Such adjustments result in more accurate and reliable outcomes and can be generalized based on their intrinsic attributes. As stated by [24, Wildi 2008, 55]:
“[. . . ] they [the filter constraints] correspond to a generalization of traditional unit root tests, accounting simultaneously for one- and multi-step ahead forecasts as well as the signal defined by Γ(.)”
Like the first constraint, this restriction will from now on be given a different name, namely i2.
The Oxford Business Dictionary defines performance as
“(...) the accomplishment of a given task measured against preset known standards of accuracy, completeness, cost, and speed”. [18, Parkinson 2006, 495]
Performance measures describe how efficient a financial model is in practice. Quantifying various key indicators makes it possible not only to assess the validity and effectiveness of a model but also to compare different plausible alternatives on a numerical basis. The following sections briefly present the performance measures used in the course of this thesis to analyse and compare different model candidates.
Take any underlying asset whose price process at time t is given by S_t; for example, the price process could be a stock price or an index. Its running maximum up to time t is defined as:

M_t = max_{u ∈ [0,t]} S_u

Drawdown D_t is defined as the drop of the asset price from its running maximum:

D_t = M_t − S_t  (2.33)
Assume an investor who enters the market at a certain point and leaves it at some following point within a given time period. Maximum drawdown measures the worst loss of such an investor: he buys the asset at a local maximum and sells it at the subsequent lowest point, and this drop is the largest in the given time period. This represents the worst period for holding the asset and can be written mathematically as:

MDD_T = max_{t ∈ [0,T]} D_t
[11, cf. Kirkpatrick and Dahlquist 2011, 37-38]
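Both quantities are straightforward to compute on a price series; a short sketch (our own code):

```python
import numpy as np

# Drawdown D_t = M_t - S_t (running maximum minus price) and the
# maximum drawdown over the whole observation period.
def max_drawdown(prices):
    running_max = np.maximum.accumulate(prices)   # M_t
    drawdown = running_max - prices               # D_t
    return drawdown.max()

prices = np.array([100.0, 110.0, 95.0, 105.0, 90.0, 120.0])
# worst case: buy at the running maximum 110, sell at the later low 90
assert max_drawdown(prices) == 20.0
```

For a monotonically rising series the drawdown is zero everywhere, so the maximum drawdown is zero as well.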
The coefficient of determination R ^{2} is used in the context of statistical models whose main purpose is the prediction of future outcomes on the basis of other related information. It is the proportion of variability in a data set that is accounted for by the statistical model. It provides a measure of how well future outcomes are likely to be predicted by the model.
A data set has values y_i, each of which has an associated modeled value f_i. Here, the values y_i are called the observed values and the modeled values f_i are sometimes called the predicted values. The ’variability’ of the data set is measured through different sums of squares.
The total sum of squares (proportional to the sample variance):

SS_tot = Σ_i (y_i − ȳ)²
The sum of squares of residuals (or residual sum of squares):

SS_err = Σ_i (y_i − f_i)²
Then, the most general definition of the coefficient of determination is:

R² = 1 − SS_err / SS_tot
R-squared not only describes how well a forecast performs but is also closely related to the financial performance of an investment. Maximizing the R-squared values of a model results in the highest possible constant returns.
[1, cf. Alexander 2008, 343-346]
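A compact sketch (our own code) of the definition:

```python
import numpy as np

# Coefficient of determination R^2 = 1 - SS_err / SS_tot for observed
# values y and modeled values f.
def r_squared(y, f):
    ss_tot = np.sum((y - y.mean()) ** 2)   # total sum of squares
    ss_err = np.sum((y - f) ** 2)          # residual sum of squares
    return 1 - ss_err / ss_tot

y = np.array([1.0, 2.0, 3.0, 4.0])
assert r_squared(y, y) == 1.0                      # perfect prediction
assert r_squared(y, np.full(4, y.mean())) == 0.0   # no better than the mean
```

Values between 0 and 1 grade the model between these two extremes; negative values indicate a model worse than predicting the mean.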
One of the most commonly cited statistics in financial analysis is the Sharpe ratio, the ratio of the excess expected return of an investment to its return volatility or standard deviation. Let R_t denote the one-period simple return of a portfolio between dates t−1 and t, and denote by μ and σ² its mean and variance:

μ = E[R_t],  σ² = Var(R_t)
Recall that the Sharpe ratio (SR) is defined as the ratio of the excess expected return to the standard deviation of return:

SR = (μ − R_f) / σ
where the excess expected return is usually computed relative to the risk-free rate R_f. Because μ and σ² are the population moments of the distribution of R_t, they are unobservable and must be estimated using historical data.
Given a sample of historical returns (R_1, R_2, ..., R_T), the standard estimators for these moments are the sample mean and variance:

μ̂ = (1/T) Σ_{t=1}^{T} R_t,  σ̂² = (1/T) Σ_{t=1}^{T} (R_t − μ̂)²
from which the estimator of the Sharpe ratio follows immediately:

ŜR = (μ̂ − R_f) / σ̂  (2.44)

[26, cf. Zopounidis et al. 2010, 245-262]
For the practical purposes of this thesis, the risk-free instrument R_f has been set at two percent, which corresponds to realistic risk-free market rates.
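A sketch of the estimator with the two-percent risk-free rate mentioned above (our own code; the return series here is simulated, not thesis data):

```python
import numpy as np

# Sharpe ratio estimator: sample mean excess return over sample volatility.
def sharpe_ratio(returns, rf=0.02):
    mu_hat = returns.mean()
    sigma_hat = returns.std()   # population (1/T) estimator
    return (mu_hat - rf) / sigma_hat

rng = np.random.default_rng(42)
returns = 0.05 + 0.10 * rng.standard_normal(1000)   # simulated annual returns
sr = sharpe_ratio(returns)
assert 0.1 < sr < 0.5   # roughly (0.05 - 0.02) / 0.10 = 0.3
```

Higher values mean more excess return per unit of volatility, which is why semi-constant growth strategies score well on this measure.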
Return on Investment (abbreviated ROI) is a performance measure used to evaluate the efficiency of an investment or to compare the efficiency of a number of different investments. To calculate ROI, the return on an investment is divided by the cost of the investment:

ROI = (gain from investment − cost of investment) / cost of investment
The result is expressed as a percentage or a ratio. Return on investment is a popular metric because it is versatile and simple to use. If an investment does not have a positive ROI, or if there are alternative investment opportunities with a higher ROI, the investment should not be undertaken. A point of criticism is that the risks of the investments are not considered.
[2, cf. Aliaga 2002, 145-148]
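As a minimal sketch (our own code):

```python
# ROI: gain from investment minus its cost, divided by the cost.
def roi(gain, cost):
    return (gain - cost) / cost

assert roi(gain=1200.0, cost=1000.0) == 0.2    # 20 percent return
assert roi(gain=900.0, cost=1000.0) == -0.1    # negative ROI: reject
```

Note that the measure is scale-free: it says nothing about the risk taken or the absolute size of the position.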
The purpose of this chapter is to explain the experimental design step by step, from the initial optimization procedures to the resulting investment strategy. The focus is centered on the development of an optimal filter (accounting for both speed and reliability) for the detection of turning points. Different methodologies for in- and out-of-sample analysis are also evaluated in order to achieve the mentioned goal. Additionally, the proposed investment strategies prioritize quasi-constant profits instead of maximizing yield over the span of the real-time procedure.
Standard and Poor’s 500 Index is a capitalization-weighted index of 500 stocks. The index is designed to measure the performance of the broad domestic economy through changes in the aggregate market value of 500 stocks representing all major industries [3, cf. Bean 2010, 9-10]. Corepoint Capital AG provided the daily historical data of the S&P 500 from 1990 until 2011 for the purpose of this analysis.
illustration not visible in this excerpt
Figure 3.1.: S&P 500 Index between 2005 - 2011
The periodogram introduced in section 2.3.1 will be the basic tool used to uncover seasonal and reiterative events of the S&P series. The type and length of filter applied in the optimization procedure is highly dependent on the frequential structures found in the periodogram. Therefore, it is advisable to take some time to analyze not only the structures but also the length (in years) of the potential input data for an appropriate methodology and filter definition. This choice will play a basic role in the resulting out-of-sample output and should be approached accordingly.
It is customary to work with the log returns in financial time series. Empirical tests, nevertheless, have shown that the best possible results are achieved by using the untransformed data, ergo without using log-returns. Therefore, all future analyses will be done using the original untransformed data.
After analyzing the 21 years of available data, it became obvious that a reduction had to be made. For the purposes of this thesis, the last available year, 2011, was chosen as the out-of-sample data on which to apply the resulting model. This choice is based on the desire to replicate real circumstances, where only historical data is available for future strategy development.
For the in-sample data, solely the year 2010 was chosen. Intuitively, this choice automatically makes sense, since the most recently obtained data is most likely to represent the future. It would also have been possible to choose more than one year for the analysis (e.g. 2009-2010 or 2008-2010), but initial empirical tests verified that better results are obtained using only the aforementioned year.
The following graphic shows the basic periodogram and the log-periodogram (both based on the year 2010):
illustration not visible in this excerpt
Figure 3.2.: The Basic- and Log-Periodogram of the S&P 500 time series in 2010.
The massive spectral power at low frequencies makes the basic periodogram diagnostically inconclusive. The logarithmized periodogram of the power spectrum, on the other hand, allows a better illustration of the resulting frequencies. Since the focus is placed on identifying the derivative of the trend, ergo the low frequencies of the periodogram, a low-pass filter (see 2.2.3.1 for its definition) was chosen as the most appropriate filter to extract the desired characteristics of the series. The corresponding cut-off of the filter was set at π / 12 in order to extract the desired trend.
illustration not visible in this excerpt
Figure 3.3.: A low pass filter with a cut-off at π / 12 .
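As an illustration of the diagnostic used above, a minimal periodogram sketch follows (the thesis's analysis was done in R; the scaling convention and names here are illustrative assumptions, since conventions vary):

```python
import numpy as np

def periodogram(x):
    """Basic periodogram of a series x: squared DFT magnitudes at the
    Fourier frequencies in [0, pi]. The mean is removed first so that
    the zero-frequency component does not dominate the plot."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    dft = np.fft.rfft(x - x.mean())
    freqs = np.arange(len(dft)) * 2 * np.pi / n   # frequencies 0 .. pi
    power = np.abs(dft) ** 2 / n
    return freqs, power
```

The log-periodogram of the text is then simply `np.log` of the (nonzero) power values, which makes the low-frequency structure below a cut-off such as π/12 visible against the dominant trend power.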
The first step towards developing an appropriate DFA model for the out-of-sample data is to analyze the in-sample series and choose appropriate coefficients for the minimization algorithm. Since only the historical data up to the experimental "present" is used to fit the model, no predictions or assumptions about the future should be made. In this step of the analysis, only the accuracy of the DFA procedure is estimated based on the in-sample definitions.
It is important to mention that "too accurate" in-sample fitting of data should generally be avoided because it can lead to undesired overfitting thus resulting in poor out-of-sample performance. Furthermore, it is also important to consider that a certain time-delay between model and real-time data cannot be avoided (see section [2.3] for more details).
In order to find the optimal coefficients for the DFA analysis, various choices were tested empirically, judged by the smoothness of the resulting curve and the corresponding R ^{2} value. After many attempts it became obvious that better results were obtained by setting i1 and i2 to TRUE, ergo by applying both correction restrictions (see section [2.4.3] for more details). This reduced the practical test scope to the variables lambda, expweight and filter length. Continued analysis also made clear that applying the time-delay restriction (viz. i2) dramatically reduced the influence of the lambda coefficient; lambda could therefore be defined arbitrarily without substantially changing the results. Empirical testing further showed that the optimal filter length lies between ten and sixteen, with twelve being optimal, and that the optimal expweight lies in the proximity of two. The resulting optimal coefficients were thus identified as: filter length (12), expweight (2), lambda (10), i1 (TRUE), i2 (TRUE). The following table summarizes the testing procedure for different ceteris-paribus values in the proximity of the optimal results obtained:
illustration not visible in this excerpt
Table 3.1.: DFA in-sample coefficients with their corresponding performance measures: S&P 500 (Year 2010)
This table summarizes fourteen coefficient possibilities for the in-sample analysis of the S&P 500 series (based on year 2010) with their corresponding performance measures, R ^{2} and Sharpe ratio (see chapters [2.5.2] and [2.5.3] respectively for more details). All fourteen combinations were defined ceteris paribus in the vicinity of the optimal values in order to refine the results. Conclusively, the original roughly-defined optimal values proved to be the ones with the best results (for the complete empirical analysis with amplitude, time-shift and resulting curve diagnostics, see Appendix A.4).
The next step of the analysis involves the out-of-sample testing of the DFA procedure based on the previously obtained in-sample coefficients. As explained in the first pages of this thesis, the whole experiment was designed to apply the DFA model to the most practically relevant year among the available data, ergo the year 2011. The in-sample data choice, as well as the corresponding analysis, is based on the desire to develop the best possible real-time method for the identification of turning points. Therefore, besides these choices, an appropriate out-of-sample real-time procedure has to be designed.
In order to test the previous results, three different real-time out-of-sample methods were designed. The first method, window stationary, is the simplest possible choice. It involves the application of the DFA procedure with its corresponding coefficients over the whole out-of-sample data set. The main disadvantage of this method is that it has no practical significance whatsoever, since it is not a true real-time out-of-sample estimation (for more details on the implementation, see out of sample1 in Listings 5).
The second implemented method, window enlargement, involves real-time parameter calculation from a fixed starting point, in this case the in-sample data of 2010. With every new daily data input, the data matrix is expanded by one and the output is recalculated using the DFA algorithm. The main disadvantage of this method is that it gives too much weight to past data, which does not suit the goal of this analysis. It is also numerically demanding, since the number of necessary calculations grows with each daily input (for more details on the implementation, see out of sample2 in Listings 5).
The third and last implemented method, window shift, involves shifting a calculation window of predefined length. With every new data input, the fixed-length data matrix is updated: the oldest value is removed, the data shifted, and the newest input added for the daily application of the DFA algorithm. In this case, the window was set to a length of 260 data points, which corresponds to a year's worth of daily data. This choice was made through empirical testing of different window lengths in order to find the best possible representation. It also makes sense intuitively, since seasonality and trend repetitiveness tend to be mostly monthly or quarterly occurrences restricted to a specific year. The main advantage of this method lies in its flexibility to assimilate and adapt to new data. If the economic or financial situation changes dramatically, this method is the obvious choice to apply (for more details on the implementation, see out of sample3 in Listings 5).
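The window-shift scheme just described can be sketched as follows. This is not the thesis's out of sample3 listing: the `fit` callable here stands in for the full DFA re-estimation step and is a hypothetical placeholder.

```python
def window_shift(series, window=260, fit=None):
    """Sketch of the 'window shift' out-of-sample scheme: for every new
    daily observation, re-estimate on the most recent `window` points
    only (drop the oldest point, append the newest).

    `fit` stands in for the DFA re-estimation; the default (the window
    mean) is purely illustrative.
    """
    if fit is None:
        fit = lambda w: sum(w) / len(w)
    out = []
    for t in range(window, len(series) + 1):
        out.append(fit(series[t - window:t]))  # fixed-length window ending at t
    return out
```

The window enlargement method differs only in that the window start stays fixed (`series[:t]`), which is why its cost per day grows with the sample.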
The following table summarizes the test results of the three described methods:
illustration not visible in this excerpt
Table 3.2.: Out-of-sample DFA Methods
Based on these results and the analyses made (see Appendix A.5), the most representative out-of-sample method is the third, namely window shift. The following plot describes the structure of the results obtained by applying this method iteratively in real-time. The red line divides the in-sample from the out-of-sample data, with the various colored lines representing the corresponding real-time results.
illustration not visible in this excerpt
Figure 3.4.: Out-of-sample window shift method: Real-time iteration results.
For practical purposes, all out-of-sample analyses will be based on this method.
Having previously defined the best possible values for the proposed DFA-Model, the next necessary step involves the development of a proper method for the identification of turning points. Turning points, defined as a mathematical construct, are the subject of numerous theorems, therefore allowing a wide range of possibilities to choose from.
The first logical construct that arises for anyone with basic mathematical knowledge would be the use of derivatives to identify local and global maxima and minima. This idea, although quite easy to apply, requires predefined mathematical functions for its implementation. Since the previous results are numerical vectors or matrices, it would be necessary to interpolate the resulting time series to obtain a corresponding explanatory function. This implies a structural transformation of the data, with the unavoidable consequence of information loss. The obtained curves might be “shapelier”, but most certainly at the expense of accuracy.
Another option for the identification of turning points is the calculation of log-returns over the data. This method is also quite easy to implement but has also the intrinsic disadvantage of information loss due to numerical rounding errors in the inverse transformation. Although this method will not be used for the identification of turning points, it will serve to analyze the resulting curves by using a so-called “Trend Progression Diagram”.
Finally, the last method to be discussed is based on the primeval definition of the punctured neighborhood of a function. Global maxima and minima are usually easy to calculate using integrated software methods (e.g. max, min). The main problem arises when calculating their local counterparts. By using a windowed method, a kind of “snapshot” over a predefined moving interval, it is possible to apply said integrated methods with ease, thus solving the problem of local turning point identification. The idea, like the basic mathematical definition of a “limit”, is to restrict the calculations to an interval of definite length, which can be arbitrarily expanded or reduced depending on the user’s needs. One of the main benefits of this method is that it uses the original unchanged data, allowing an accurate representation of the results obtained by the DFA minimization procedure. Also, by allowing the user to define the length of said interval, it is possible to obtain a useful generalization, which will later become a valuable strategic tool for continuity (for more information, see MHP in chapter 3.3). For practical purposes, this method has been implemented as an R function (for more details, see turnplot and peaks in Listings 5).
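The windowed idea can be sketched as follows. This is an illustrative reimplementation in the spirit of the thesis's R functions peaks and turnplot, not their actual code; the window half-width parameter is an assumption.

```python
def peaks(y, width=5):
    """Windowed turning-point detection: a point is a local maximum
    (minimum) if it exceeds (falls below) every value in the punctured
    neighborhood of `width` points on each side.

    Returns the index lists (maxima, minima). Endpoints closer than
    `width` to the boundary are not classified."""
    maxima, minima = [], []
    for i in range(width, len(y) - width):
        window = y[i - width:i] + y[i + 1:i + width + 1]  # punctured neighborhood
        if y[i] > max(window):
            maxima.append(i)
        elif y[i] < min(window):
            minima.append(i)
    return maxima, minima
```

Enlarging `width` keeps only the more prominent turning points, which is exactly the tunable generality the text refers to.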
By working with the optimal values for the DFA coefficients presented in chapter 3.1.2, it is possible to identify the resulting real-time turning points and develop a corresponding investment strategy. The plots presented below correspond to the best performer among all tested values (ergo, the values in the first row of the in-sample and out-of-sample tables).
The following plot shows the turning point identification using the previously discussed procedures, peaks and turnplot, for the optimal DFA coefficients. It is important to note that both the original and the filtered data have been normalized (starting point equivalent to 100%) in order to allow a clearer comparison.
illustration not visible in this excerpt
Figure 3.5.: Turning points out-of-sample identification
The green points correspond to the local minima, while the red ones identify the local maxima. By trading the real curve at each of these points (green for long, red for cash) it is possible to calculate profits with and without trading fees, thus yielding financially quantifiable results for the proposed DFA model. Other performance measures such as Return on Investment and the Sharpe ratio were also calculated in order to give a wider impression of the overall performance. The following table summarizes the obtained results:
illustration not visible in this excerpt
Table 3.3.: DFA out-of-sample coefficients with their corresponding performance measures: S&P 500 (Year 2011)
It is important to mention that, in order to allow a real application of the developed model, some practical issues have to be addressed. If the methodology is conducted in real-time, the resulting turning points are identified by the end of the working day. This implies that a trade can only be made on the day after the turning point identification. For this reason, all profit calculations implemented in this thesis take this complication into consideration: besides the intrinsic time-delay of the model, all trades are made at point t+1 of the corresponding turning point identification.
Another practical issue to take into consideration is the value of the corresponding trading fees. For this and all future analyses, the trading fees have been set at one percent of the respective trading values, which is somewhat higher than the values financial institutions work with. This choice was made deliberately in order to evaluate the performance of the investments under harsher conditions, allowing also a generalization to private investors.
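The two practical conventions just described, execution at t+1 and a one-percent fee per trade, can be sketched in a simple long/cash backtest. This is an illustrative sketch, not the thesis's profit-calculation listing; the signal encoding is a hypothetical choice.

```python
def backtest(prices, signals, fee=0.01):
    """Long/cash backtest with the conventions described in the text:
    a signal observed at day t is executed at the price of day t+1,
    and each trade costs `fee` (1%) of the traded value.

    `signals[t]` is 'long' (go long) or 'cash' (liquidate); any other
    value means no action. Returns final wealth for initial wealth 1.0."""
    capital, position = 1.0, 0.0          # start fully in cash
    for t in range(len(signals) - 1):     # the last signal has no t+1 price
        price = prices[t + 1]             # execution at t+1, not t
        if signals[t] == 'long' and position == 0.0:
            position = capital * (1 - fee) / price
            capital = 0.0
        elif signals[t] == 'cash' and position > 0.0:
            capital = position * price * (1 - fee)
            position = 0.0
    return capital + position * prices[-1]  # mark to market at the end
```

Setting `fee=0.0` gives the "without trading fees" variant of the cash plots below.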
Continuing with the technical analysis, as discussed before, it is also possible to calculate the “Trend Progression Diagram” of the resulting curve. This plot describes not only the location of the turning points but also the overall structure of the curve (positive slopes mean increases, negative slopes decreases, and zeros identify the local turning points).
illustration not visible in this excerpt
Figure 3.6.: Out-of-sample trend progression
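Following the log-return construction mentioned in the turning-point discussion above, the values behind such a trend progression diagram can be sketched as (an illustrative sketch, not the thesis's listing):

```python
import math

def trend_progression(filtered):
    """Log-returns of the filtered curve. Positive values indicate a
    rising trend, negative values a falling one, and sign changes mark
    the local turning points."""
    return [math.log(b / a) for a, b in zip(filtered, filtered[1:])]
```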
The resulting investment strategy on every identifiable real-time turning point can be summarized in the following cash plots. These represent the cumulative profits over time (with and without trading fees) compared to the underlying index.
illustration not visible in this excerpt
Figure 3.7.: Cumulative profits without trading fees
illustration not visible in this excerpt
Figure 3.8.: Cumulative profits with trading fees
Finally, the following plot includes a summarized version of all the cumulative profits with trading fees from the five out-of-sample tested values (Table 3.3).
Figure 3.9.: Cumulative profits of the five out-of-sample possibilities (with trading fees)
The remaining diagrams, corresponding to all tested values, can be found in Appendix A.6.
VIX is the ticker symbol of the Chicago Board Options Exchange (CBOE) Market Volatility Index. Several sources discuss the meaning and implications of the VIX, which are summarized in the next paragraphs [19, cf. Rhoads 2011, 1-30], [9, cf. Grotewohlt 2011, 3-8], [6, cf. Connors and Alvarez 2009, 52-61].
The VIX is a measure of the implied volatility of options on the underlying index over a 30-day period. It is calculated by taking the weighted average of the implied volatilities of eight OEX call and put options (S&P 500 options).
When market volatility increases, the VIX reaches high values, which correlate with declines in the S&P 500, indicating that the markets are dominated by fear and pessimism. This usually coincides with market lows, which directly imply significant movements in equity markets. On the other hand, when the VIX lies at low values, there is confidence and tranquility on the investors’ side. [16, Meth 2012, 192] comments on the structural behavior of the VIX in relation to strategic change:
“It is speculated that a high value corresponds to a more volatile market and therefore more costly options. These more costly options come from investors that charge a higher premium to insure against someone investing in a big change in the index price to make a maximal profit with minimal effort.”
[13, Krupansky 2012] suggests a helpful categorization of market anxiety based on the corresponding VIX values:
- 5-10 −→ extremely low anxiety, extreme complacency
- 10-15 −→ very low anxiety, high complacency
- 15-20 −→ low anxiety, moderate complacency
- 20-25 −→ moderate anxiety, low complacency
- 25-30 −→ moderately high anxiety
- 30-35 −→ high anxiety
- 35-40 −→ very high anxiety
- 40-45 −→ extremely high anxiety
- 45-50 −→ near panic
- 50-55 −→ moderate panic
- 55-60 −→ panic
- 60-65 −→ intense panic
- 65+ −→ extreme panic
A good, comfortable range to be in is between 18 and 27: the economy is stable and people’s feelings about the market are mainly complacent [4, cf. Carter 2012, 147-148].
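The banded categorization above is a simple lookup, which can be transcribed directly (function name is illustrative; the bands are exactly those quoted from Krupansky):

```python
def vix_anxiety(vix):
    """Map a VIX level to the anxiety category quoted above.
    Values of 65 and above fall into 'extreme panic'."""
    bands = [
        (10, "extremely low anxiety, extreme complacency"),
        (15, "very low anxiety, high complacency"),
        (20, "low anxiety, moderate complacency"),
        (25, "moderate anxiety, low complacency"),
        (30, "moderately high anxiety"),
        (35, "high anxiety"),
        (40, "very high anxiety"),
        (45, "extremely high anxiety"),
        (50, "near panic"),
        (55, "moderate panic"),
        (60, "panic"),
        (65, "intense panic"),
    ]
    for upper, label in bands:
        if vix < upper:
            return label
    return "extreme panic"
```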
Applying the same modus operandi as for the S&P 500 Series, it is again possible to calculate the optimal DFA in-sample coefficients with their respective performance measures. The following table summarizes the obtained results:
illustration not visible in this excerpt
Table 3.4.: DFA in-sample coefficients with their corresponding performance measures: VIX (Year 2010)
Again, the first row of the table corresponds to the best performer among all tested values. It is important to point out that the R-squared value of the third row is higher than that of the suggested one. Although those results correlate highly with the original data, the overall preference stays with the first row because of the resulting smoothness of the curve (see Appendix A.8 for a clearer comparison).
The following plot shows the resulting filtered data of the preferred coefficients in real-time. The remaining diagrams, corresponding to all tested values and including amplitude and time-shift diagrams, can be found in Appendix A.9.
illustration not visible in this excerpt
Figure 3.10.: DFA in-sample time series compared to the original VIX time series
Furthermore, the preferred out-of-sample methodology (window shift, see chapter 3.1.2.2.1), can also be applied in order to calculate the correspondent performance measures. The following table summarizes the obtained results:
illustration not visible in this excerpt
Table 3.5.: DFA out-of-sample coefficients with their corresponding performance measures: VIX (Year 2011)
Finally, the following plots illustrate the previous out-of-sample results. The remaining diagrams, corresponding to all tested values, can be found in Appendix A.9.
Figure 3.11.: DFA out-of-sample time series compared to the original VIX time series
illustration not visible in this excerpt
Figure 3.12.: Trend progression out-of-sample
illustration not visible in this excerpt
Figure 3.13.: The S&P 500 Index (top) compared to the CBOE Volatility Index (VIX) (below).
Even though the VIX is quoted as a percentage rather than a dollar amount, a number of VIX-based derivative instruments exist. Consequently, it is possible to develop a strategy for the sole purpose of investing in the VIX. Nevertheless, the VIX has a deeper meaning for the purposes of this thesis: since it is closely related to the S&P 500, being essentially a transformation of S&P 500 values, it can serve as a tool for strategic development.
Recall the definition proposed by [13, Krupansky 2012] discussed in the previous chapter. If the VIX values are understood as a categorization of market anxiety, it is also possible to identify, with some degree of certainty, the position of turning points in the S&P 500 series. Since these were already identified in real-time using the DFA methodology, the VIX turning point locations can serve as a verification mechanism for the model-based identification. Nevertheless, a predefined VIX value, although useful for describing the current market situation, does not suffice to make a correct strategic decision. An identification procedure for turning points is also necessary in order to correctly assess the risks and manage investments. [3, Bean 2010, 273] explains the problem, making the following recommendation:
“Buy stocks when the VIX begins to turn from an upper extreme. Do not buy based solely on the level of the VIX being at an extreme. Wait for the level to demonstrate a true obstacle for the VIX, then capitalize as it makes a predicted reversal. As the VIX falls, the fear factor also falls, and stocks usually rise.”
The whole idea can be clearly understood from Figure 3.13. Most of the local maxima of the VIX series coincide with local minima of the S&P series and, conversely, the local minima of the VIX coincide with local maxima of the S&P series. The following plot summarizes the use of the VIX index as an out-of-sample (real-time) verification mechanism.
illustration not visible in this excerpt
Figure 3.14.: Turning points of the S&P 500 Index (top) compared to the CBOE Volatility Index (VIX) (below).
The red lines correspond to the most representative VIX turning points translated to the S&P 500 series. As previously suspected, for most turning points the minima coincide with the maxima and vice versa. Since the out-of-sample analysis is always done in real-time, the VIX can serve as a verification mechanism when there are doubts about a turning point location in the original S&P DFA model.
Among all European indices, the EURO STOXX 50 has become one of the leading barometers of Europe. It is calculated both as a price index and as a performance index; the colloquial name EURO STOXX is understood to mean the price index. The EURO STOXX 50 is composed of 50 companies. It was established on February 26th, 1998 and is managed by STOXX Ltd. in Zurich. The leading European index marked a record high in March 2000. After the bursting of the speculative bubble in the technology sector (the dotcom bubble), the index fell to a record low in March 2003. In the course of the global financial crisis, the EURO STOXX 50 began to decline again with the onset of the U.S. real estate crisis in the summer of 2007. From the spring of 2009, the index was back on its way to the top, growing 69.5 percent until February 2011 [8, cf. Grant 2011, 8].
illustration not visible in this excerpt
Figure 3.15.: EURO STOXX 50 Index between 2005 - 2011
Analogously to the S&P 500 Index the optimal coefficients for the DFA procedure in the case of the EURO STOXX 50 have to be empirically estimated (see chapter 3.1.2.1 for more details on the methodological approach).
illustration not visible in this excerpt
Table 3.6.: DFA in-sample coefficients with their corresponding performance measures: EUROSTOXX 50 (Year 2010)
The selection and evaluation of the parameters follow a similar pattern to the S&P 500 (see chapter 3.1.2.1.1 for more details). In this case too, the complete empirical analysis with amplitude, time-shift and resulting curve diagnostics can be found in Appendix A.10.
For the out-of-sample results, the same method as before was implemented (see chapter 3.1.2.2.1). The following table summarizes said results.
illustration not visible in this excerpt
Table 3.7.: DFA out-of-sample coefficients with their corresponding performance measures: EUROSTOXX (Year 2011)
It is important to note that the R ^{2} value of variant number three is slightly higher than that of the suggested "best performer". Nevertheless, the ROI and the corresponding profits with and without trading fees indicate that the first variant is in fact the most profitable one, showing the limits of R ^{2} as a performance measure. As discussed before, a high correlation between real and modeled data does not necessarily imply better results. Because the focus of this thesis is the identification of the trend derivative, the resulting modeled curves have to be smoother than the original data in order to allow a distinct identification of the corresponding turning points. Therefore, a descriptive visualization of the results is as important as the respective performance measures in assessing the validity of the model.
As before, the corresponding "Trend Progression Diagram" of the resulting curves can also be calculated to allow an overall visualization of the structure.
illustration not visible in this excerpt
Figure 3.16.: Trend progression out-of-sample
The resulting investment strategy on every identifiable real-time turning point can be summarized in the following cash plots. As before, these represent the cumulative profits over time (with and without trading fees) compared to the underlying index.
illustration not visible in this excerpt
Figure 3.17.: Cumulative profits without trading fees
Figure 3.18.: Cumulative profits with trading fees
Finally, the following plot includes a summarized version of all the cumulative profits with trading fees from the five out-of-sample tested values (Table 3.7).
illustration not visible in this excerpt
Figure 3.19.: The cumulative cash flow of all five out-of-sample possibilities.
The remaining diagrams, corresponding to all tested values, can be found in Appendix A.11.
VSTOXX is the ticker symbol of the EURO STOXX 50 implied volatility index. Analogously to the VIX, the VSTOXX is a measure of the implied volatility of the underlying EURO STOXX 50 index [4, cf. Frick et al. 2012, 181-188].
Since both the VIX and the VSTOXX are calculated using essentially equivalent formulae, the previously discussed modus operandi can also be extrapolated to analyze and classify this volatility index (see chapter 3.1.4).
Following the same schematic as for the VIX, it is possible to calculate the optimal DFA in-sample coefficients with their respective performance measures. The following table summarizes the obtained results.
illustration not visible in this excerpt
Table 3.8.: DFA in-sample coefficients with their corresponding performance measures: VSTOXX (Year 2010)
The following plot shows the resulting filtered data of the preferred coefficients in real-time. The remaining diagrams, corresponding to all tested values and including amplitude and time-shift diagrams, can be found in Appendix A.13.1.
illustration not visible in this excerpt
Figure 3.20.: DFA in-sample time series compared to the original VSTOXX time series
Furthermore, the preferred out-of-sample methodology (window shift, see chapter 3.1.2.2.1), can also be applied in order to calculate the corresponding performance measures. The following table summarizes the obtained results:
illustration not visible in this excerpt
Table 3.9.: DFA out-of-sample coefficients with their corresponding performance measures: VSTOXX (Year 2011)
Finally, the following plots illustrate the previous out-of-sample results. The remaining diagrams, corresponding to all tested values, can be found in Appendix A.14.
illustration not visible in this excerpt
Figure 3.21.: DFA out-of-sample time series compared to the original VSTOXX time series
illustration not visible in this excerpt
Figure 3.22.: Trend progression out-of-sample
3.2.2.2. VSTOXX as strategic extension for the EURO STOXX
Figure 3.23.: The EURO STOXX 50 Index (top) compared to its implied Volatility Index (VSTOXX) (below).
In an analogous manner as the VIX for the S&P series, the VSTOXX can function as a verification mechanism for turning point identification (see the corresponding discussion of this functionality on chapter 3.1.4.1).
The following plot summarizes the use of the VSTOXX index as a verification mechanism out-of-sample (real time).
[...]