I’m finding myself with very little enthusiasm to look at 2Q GDP numbers. Part of the reason is of course Ramadan: too many things to do and too little energy to do them. But the other reason is that it’s pretty obvious that although by my reckoning we’re out of recession (and the “official” figures will probably show this in 3Q), the nature of the recovery means that it is likely to be weak and unsustainable, and that the risk of a relapse is still very much on the cards.
As covered in my previous post, q-o-q GDP rebounded nicely in 2Q 2009 (seasonally adjusted log quarterly changes, annualized, and log annual changes; 2005=100):
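For readers unfamiliar with the conventions in these charts: with $Y_t$ denoting the seasonally adjusted quarterly level, the two growth measures work out to

$$\text{q-o-q annualized} = 400\,(\ln Y_t - \ln Y_{t-1}) \qquad \text{y-o-y} = 100\,(\ln Y_t - \ln Y_{t-4})$$

both in percent.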
But growth calculations can be misleading – y-o-y is still negative, and for some sectors sharply so, while the q-o-q numbers can be highly volatile. In this instance, I’m more inclined to look at the seasonally adjusted levels, partly to gauge whether the recovery in economic activity is significant, as well as to check on the current level relative to pre-crisis levels. That gives an idea of how much further economic activity has to recover before we can call the economy truly healed i.e. whether we’ve closed the output gap. But first, growth trends on the demand side (seasonally adjusted log quarterly changes, annualized; 2005=100):
Exports are flat, while private consumption, imports and investment are showing improvement. On to the levels (seasonally adjusted; 2005 prices):
I think we can call the green shoots here real, but they are very, very small. One thing to note (I just noticed this while compiling these charts) is that private consumption appears to have been above its long term trend in 2007-2008. That suggests that private consumption growth will fall back in the coming quarters, and can’t be relied upon to drive the recovery.
There’s one further demand side number that needs to be highlighted. 2Q GDP rose by RM5.9 billion in real terms compared to 1Q, and about RM3.9 billion after seasonal adjustment. Looking at the marginal contribution (seasonally adjusted) from each demand component, we find the external sector (exports less imports) at –RM9 billion, Public consumption –RM1 billion, private consumption +RM1.8 billion, and investment +RM0.4 billion. That means there’s about an RM12 billion discrepancy between consolidated real expenditure and GDP – and it comes from changes in inventories:
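To spell out that arithmetic (all figures RM billion, seasonally adjusted, and rounded):

$$3.9 - (-9.0 - 1.0 + 1.8 + 0.4) = 3.9 + 7.8 = 11.7 \approx 12$$

i.e. the change in inventories is whatever is left over after subtracting the identified expenditure components from the change in GDP.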
Here’s what I think happened: in response to the drastic drop in trade beginning in 4Q 2008, manufacturers and exporters savagely cut into their ready stock of finished goods, as well as their on-hand supplies of intermediate goods. This hypothesis accords with anecdotal evidence of work stoppages and reduced capacity utilization in 1Q 2009. As it became clear that the global economy was beginning to recover (and as trade finance began flowing again), this trend reversed, causing an increase in output to replenish depleted stocks – which fits with the observation of faster import growth but stagnant export growth. Inventories are still dropping, however, which suggests that the inventory build-up will continue into 3Q 2009 and possibly 4Q 2009. That said, here are the key points:
1. External demand for Malaysian goods continues to be stagnant, in line with the observed meagre improvement in monthly export data.
2. That means, in the absence of a pick-up in exports in 3Q, that once inventories are up to more “normal” levels, output will drop again to the level determined by demand.
3. Ergo, recovery will continue to be weak, with a risk of a further dip if neither external nor domestic demand improves.
It’s also interesting to speculate on the impact of the government’s stimulus packages. One thing to note here is that public consumption under the national accounts is not synonymous with fiscal deficit spending: to see the impact of the latter we need to know what proportion of investment is being done by the public sector, as well as the higher order impact of increased government expenditure on private consumption and investment. Unfortunately, speculate is all we can do at the moment – but I’m inclined to believe that there has been an impact, and that the impact has been greater than the nominal excess in fiscal expenditure i.e. the multiplier is/was greater than one.
I can’t make that argument for the growth in private consumption – that may have been driven by psychological factors (a return of Keynes’ “animal spirits”) as much as anything else. As news over the second quarter suggested that the worst was behind us, people may have been more inclined to spend – it was not necessarily due to direct or indirect government action. But in terms of investment, I find it hard to believe that the private sector would maintain, much less increase, investment in an environment of stagnant external demand and very low capacity utilization – especially since the manufacturing sector appears to have begun the recession with excess capacity in the first place.
Whatever the case may be, it’s clear that the main story here is that the bounce in 2Q GDP was largely inventory driven – which is not sustainable. Which in turn means that fiscal deficit spending will have to continue to play a role in sustaining economic activity until external demand recovers, or until new sources of growth are found. That’s a tall order, as I don’t think the external demand situation will improve much going forward – China’s stimulus is already running out of steam, and the US is in no state (nor has the desire) to regain its position as global consumer of last resort.
We live in interesting times.
Monday, August 31, 2009
Thursday, August 27, 2009
We're Out Of Recession - For Now
Quick post while I'm still absorbing the 2Q 2009 GDP numbers, as well as the concurrent release of the July MSB. As will be reported in the papers today, real GDP fell 3.9% yoy in the second quarter, after a 6.2% drop in 1Q 2009. The log changes are a little worse, at 4.0% and 6.5% (log annual changes; 2005=100):
But using the more standard seasonally adjusted, q-o-q annualised method (see this post for the difference in calculation), 2Q 2009 growth reached a whopping +12.9% (annualised log quarterly changes; 2005=100):
On that basis, barring data revisions, Malaysia went through just a two-quarter recession (unofficially), even if the contraction was fairly deep - the 1Q 2009 figure was -18.1%. Although it's clear we're in the recovery phase, we're not quite out of the woods yet, as the level of output is well below trend:
Tuesday, August 25, 2009
Gretl Tutorial III: Diagnostics – “He’s dead, Jim”
Gretl Tutorial I
Gretl Tutorial II
Now that we have a decent regression that seemingly outlines a plausible relationship between exports and lagged imports, we have to test from a statistical perspective whether we can make inferences from the resulting equation. That will determine whether we can actually make statistically reliable forecasts. So what are the potential problems we can encounter?
Glossing over the academic discussion of the properties of estimators, from a simplistic perspective what we are trying to achieve is a model where coefficient estimates are unbiased, efficient, linear and consistent. This is actually best assessed not by looking at the estimators themselves but by looking at the residuals, which are the differences between the actual data and the estimated relationship. You can view the residuals in Gretl from the results screen, by selecting Graphs->Residual plot->Against time:
A cursory examination of the residuals is revealing. As I noted earlier, there’s a discontinuity between our results and the actual data for the period before 2000. Also there’s a marked seasonal pattern in the residuals, which means seasonality is a factor as well. But I’m leaving that for the next post, which will deal with dummy variables. For now, I want to look at some of the standard diagnostic tests that are typically used to make sure our model is a valid one.
The two main problems with time series stem from the requirement that residuals must be independent and normally distributed:
1. Serial correlation – which means residuals are not independent over time, and are in fact correlated;
2. Heteroscedasticity – which means the variance of the residuals is not constant over time, so the reported standard errors (and any inference based on them) can’t be trusted.
There are a number of standard statistical tests that can be performed to find out whether either of these two conditions hold; if they do, then we have to rethink our model.
The most common method to test for serial correlation is one you don’t have to perform; both Gretl (and EViews) report the Durbin-Watson statistic as part of the estimation results. In the Gretl results screen, it is the last number reported in the right column, with a value of 1.807829. Interpreting the statistic formally requires looking up critical values, but essentially a value of 2 means no serial correlation, while values near 0 or 4 indicate perfect positive or negative serial correlation respectively. As a short cut, in the results screen you can select Tests->Durbin-Watson p-value, which gives a p-value of 0.097161, indicating that at the 95% confidence level you cannot reject the null hypothesis of no serial correlation:
Unfortunately the DW stat has a weakness in that it is only a valid test for serial correlation over 1 lag. For multiple lags, you need to use a different test. The Breusch-Godfrey test allows for this and can be reached at Tests->Autocorrelation (I’m using 12 lags here):
As you can see, while the BG test echoes the DW stat for the first lag, there is serial correlation at lags 2, 3, 5, 10, 11, and 12 (check the stars). All the test statistics (at the bottom of the test results page) show very small p-values, indicating a rejection of the null hypothesis of no serial correlation. Note that any tests you do are automatically added to the results page, below the estimation results – a nice touch.
To test for heteroscedasticity, select Tests->heteroskedasticity (sic)->White’s test:
The p-value is 0.000003, which means you have to reject the null hypothesis that there is no heteroscedasticity.
As an additional test, you can directly test whether the residuals are normally distributed (select Tests-> Normality of residual):
The p-value of 0.06723 means that it's a close-run thing - you would fail to reject the null hypothesis of normality at the 95% confidence level, but reject it at the 90% confidence level.
Lastly, I almost always test for ARCH (select Tests->ARCH):
Which suggests that ARCH is present as well. ARCH is a special form of heteroscedasticity – it’s an acronym that means auto-regressive conditional heteroscedasticity, and is especially common in financial markets. It’s also a rather fancy way of saying that volatility clumps together, as illustrated by this chart of daily log returns on the KLCI:
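As an aside, this whole battery of tests can also be run from Gretl’s script console rather than the menus. Here’s a minimal sketch – assuming series named exports and imports as in Tutorials I and II, and with command syntax as given in the Gretl manual, so do check it against your version:

# re-estimate the model from Tutorial II
logs exports imports
ols l_exports const l_imports(-1)

# Breusch-Godfrey test for serial correlation up to 12 lags
modtest 12 --autocorr

# White's test for heteroscedasticity
modtest --white

# normality of the residuals
modtest --normality

# ARCH test with 12 lags
modtest 12 --arch

The Durbin-Watson statistic is printed as part of the ols output itself.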
Going back to our trade model, we have some very obvious problems. With both heteroscedasticity and serial correlation present, I’d be very wary of relying on the model as it stands. To solve this, we need to revisit the model specification – the pattern of residuals provides the main clue. We have to deal with seasonality, and we have to deal with a potential change in the relationship between exports and imports post-2000.
But before moving on, you can save the work you’ve done by selecting File->Save to session as icon, and then saving the session. The next time you start Gretl, select File->Session files->Open Session, then select the name of the session file you saved your work under, and your work will automatically be available through the session window under an icon named “Model 1”:
Labels: exports, external trade, Gretl, imports, seasonal adjustment, seasonal effects
Tuesday, August 18, 2009
Gretl Tutorial II: OLS Regression
Gretl Tutorial I
Now that we have a dataset to play with, what can we do with it?
I based my simple trade models on the assumption that Malaysian imports and exports are cointegrated i.e. that there is a long term relationship between the two variables.
Intuitively, since 70%+ of imports are composed of intermediate goods, which are goods used as inputs into making other goods (including exports), we would expect a statistically significant relationship between exports and imports. For instance, exporters would have certain expectations of demand (advance orders) and order inputs based on that demand. After processing, the finished goods would then be exported.
In such a case, imports of inputs would lead exports by a time factor, depending on the length of time engaged in processing. This is something we can actually test, but I’ll leave that for later and just assume a lag of 1 month. Since imports are based on expected future export demand, we can then use imports to actually forecast exports.
There are some problems with making such a simplistic assumption of course (e.g. how do we account for exports with no foreign inputs?), but since this is a demonstration of regression analysis and not an econometric examination of the structure of Malaysian trade, we’ll ignore them for now. In any case, for forecasting purposes, structure (economically accurate modeling) is less important than a usable and timely forecast.
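In equation form, what we’ll be estimating is simply:

$$\ln X_t = \beta_0 + \beta_1 \ln M_{t-1} + \varepsilon_t$$

where $X_t$ is exports, $M_{t-1}$ is imports the month before, and the one-month lag stands in for the processing-time assumption above.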
If you’ve gone through the previous post, you’ll have export and import data loaded into Gretl and ready to go. The first step is to transform the data to natural logs. Select both variables (Ctrl-click), then go to the menubar and click Add->Logs of selected variables:
You’ll now have two additional variables called l_imports and l_exports:
The reason we transform into natural log form is twofold: most economic time series are characteristically exponential with respect to time, and a log transformation turns an exponential trend into a linear one. Log transformations also make elasticity calculations easier, as the estimated coefficients approximate percentage changes in the variables.
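The elasticity point rests on the standard approximation (which holds well for small changes):

$$\ln Y_t - \ln Y_{t-1} \approx \frac{Y_t - Y_{t-1}}{Y_{t-1}}$$

so differences in logs can be read directly as percentage changes.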
To estimate the regression, click Model->Ordinary least squares…:
…which will give the model specification screen:
Select l_exports then click on the “Choose” button, which sets the log of exports as the dependent variable. Then select l_imports and click the “Add” button, which sets the log of imports as the explanatory variable. At this stage, you should note that the “lags…” button turns from greyed out to normal. Click on this, which brings up the lag screen:
Set the l_imports lag to 1 in both boxes, then click “Ok” and “Ok”.[1] You’ll now get the results window:
Don’t worry if the results window looks complicated, there’s only a few numbers that you really have to deal with...for now. First are the results of the estimation itself:
l_exports = -0.551950 + 1.07273 × l_imports(-1)
The interpretation here is that a 1% rise in last month's imports is associated with a 1.07% rise in this month's exports. To have a look at your work, click Graphs->Fitted, actual plot->Against time, in the results window:
You should see this:
The red line displays actual values of l_exports, while the blue line represents the values from your estimated equation. Note that before 2000 the forecast errors are fairly large compared to after 2000, both under- and over-estimating exports. On the whole, however, the results of the equation look good, and it seems to be a fairly accurate forecast model for exports.
Now, that wasn't so hard was it?
But we still have to be sure that this is a statistically significant relationship. Ordinarily, this involves a hypothesis test of the coefficients (null hypothesis=0), which involves using the standard errors to calculate a T-ratio, which is then evaluated against the critical values in a Student's T table.[2]
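For reference, the manual calculation is just:

$$t = \frac{\hat{\beta}}{\mathrm{se}(\hat{\beta})}$$

with the null hypothesis of a zero coefficient rejected when |t| exceeds the critical value from the table.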
Since busy people can’t be bothered with stuff like that, Gretl very nicely lets you skip all those steps – all you have to pay attention to is the p-value. I could give a technical explanation for what it is, but all you have to know is that (1 - p-value) x 100 gives the confidence level. So if the p-value is 0.05, then you are 95% confident that the estimated coefficient is statistically significantly different from zero (statistically significant for short).
You can skip even this step, and just look at the stars Gretl appends on the right side of the p-value. One * means a 90% confidence level, ** means a 95% confidence level, and *** means a 99% confidence level. Just like hotels and restaurants, the more stars the better.
In the case of this estimation, we are 93.5% confident that the constant is statistically significant and 99.999% confident that the coefficient for l_imports is statistically significant.
You now have what appears to be a decent model for estimating future exports, at least for a 1-month-ahead forecast. But since forecasting obviously can’t be so simple, we’ll look at some of the necessary tests to confirm we have an econometrically solid model to rely on.
[1] You can skip this step by directly specifying the lagged data series in the session screen. Select the variable you want then click Add->Lags of selected variables, then select the number of lags you want. Then in the model screen, select the lagged variable as the explanatory variable, rather than the original explanatory variable. Whether you take this step or the one I explained above, a lagged data series will be added to your dataset.
[2] You can do this manually within Gretl if you’re masochistic. The critical values can be accessed in the session screen, by clicking Tools->Statistical Tables, then selecting “t” from the tabs. Put the value 160 in the “df” (“degrees of freedom”) box, and 0.025 in the “right-tail probability” box, which corresponds to a two-tail 95% confidence level test. You should get a critical value of 1.9749. Since the constant has a t-ratio of -1.856, it is not statistically significant, but the coefficient for l_imports has a t-ratio of 36.36, which means it is significant at the 95% confidence level. You can vary the value in the “right-tail probability” box to obtain the critical values for other confidence levels.
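Incidentally, if you prefer typing to clicking, the whole exercise above can be run from Gretl’s script console. A rough sketch, using the same variable names as above (syntax as per the Gretl manual, so verify against your version):

# create the log series
logs exports imports

# regress log exports on a constant and log imports lagged one month
ols l_exports const l_imports(-1)

# store the fitted values and residuals for later inspection
series l_exports_hat = $yhat
series resids = $uhat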
Labels: exports, external trade, Gretl, imports, seasonal adjustment, seasonal effects
Monday, August 17, 2009
Gretl Tutorial I: The Basics
Following on from this post, I'm going to use Malaysian export/import data (as in my trade posts e.g. here) to illustrate how to use Gretl to create a simple trade forecast model, essentially recreating the two forecast models I've been using previously.
First though, is getting familiar with the program and how to get data into it. The actual interface is fairly basic:
You have a familiar Windows-style menu bar at the top, and some quick links in the bottom toolbar. The middle space is the session window, where your dataset will appear. I’ve yet to figure out how to cut and paste data into Gretl (the manual doesn’t mention it), so we’ll do this the old-fashioned way – through a file import. Luckily Gretl supports the standard Excel format (1997-2003, not the newer XML-based 2007 format).
So first is getting the data in. You can download trade data in Excel format from Bank Negara’s Monthly Statistical Bulletin (the June 2009 edition):
…which should give you this:
Unfortunately, the data is not in flat file format but more a representation of the actual printed copy, so some manipulation is in order here. First is expanding the whole spreadsheet to fully expose the data-points. Select the entire spreadsheet by left-clicking on the top-leftmost cell header (left of the “A” column header), then right click on any of the row headers and select “Unhide”:
Scrolling down, you should now see the full spreadsheet, with annual data from 1975, quarterly data from 1996, and monthly data from 1996. Select the monthly data for exports and imports from 1996 to 2009:
…and copy it to a new Excel book, then type “Exports” for the first data column and “Imports” for the second data column. You should get this:
Don’t worry about the dates for now, just save the new Excel file in 1997-2003 format and note where you saved it. Next, open Gretl and click File->Open Data->Import->Excel…:
Navigate to where you saved the file, and select it. When the file opens, select "Ok" then “Yes”:
…click “Time Series”, then “Forward”:
…click “Monthly”, then “Forward”:
Since our dataset begins in January 1996, change the value to “1996:01”, then “Forward”:
…Click “Apply”:
…and you should now have an “Exports” variable and an “Imports” variable in your session window. To confirm, you can right-click on any of the variables and select “Display values”, which should give you something like this:
You can also right click on any variable and select “Time series plot” to see a graph of the data over time:
Gretl saves your data in two separate ways – the data itself is saved to its own file (either in a Gretl format or into a database format), and the session (which covers what you’ve actually done to the data) in a separate file. That way, if you want to try something different, you can open the data into a new session, rather than messing about with your original session.
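As an aside, the import can also be scripted rather than clicked through. A minimal sketch – the file name is a placeholder, and I’m quoting the setobs syntax from the Gretl manual rather than from tested experience:

# open the Excel file directly (gretl reads xls files)
open "trade.xls"

# declare the data as monthly time series starting January 1996
setobs 12 1996:01 --time-series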
The next tutorial will cover regression estimation based on the trade data you’ve loaded into the program.
Labels: exports, external trade, Gretl, imports, seasonal adjustment, seasonal effects
Monday, August 10, 2009
June IPI: Creeping Up
June's IPI reading shows continued improvement in Malaysia's industrial output, although the recovery is extremely weak. Year-on-year growth numbers are still negative, but showing improvement (log annual changes; 2000=100):
...and month on month changes aren't terribly encouraging either (log monthly changes; 2000=100):
...but the indexes themselves have either flattened out or recovered (2000=100, seasonally adjusted):
The exception is Mining, but that can be attributed to higher prices and lower volume. It appears (barring a further downturn) that we actually hit bottom in December 2008, although it's hard to call what's happened since then a recovery given that the main index has been for all intents and purposes flat. What's interesting is that the electricity index is almost back to its long term trend channel:
That augurs well as far as the demand side is concerned: psychologically, people could be less concerned over the possibility of a deeper recession, so consumption is back up to nearly last year's levels - no need to watch the electricity bill so much. Alternatively, on the supply side, factories may have begun to spin up production again after a longer-than-normal winter shutdown - although I wouldn't argue that this necessarily translates into higher output.
Or it may just be a lot warmer these days.
Whatever the reason, manufacturing has barely responded despite a recovery in electronics and electricals output, which suggests the demand side explanation is probably more correct.
As to when we'll see a broader, more robust recovery, my guess is that it will start kicking in when the funds from the stimulus packages begin being spent, rather than just disbursed. Production of some of the construction-related manufactures is already up, but getting this spending into the broader economy might take a few more months.
Yes, Folks, You Too Can Do This At Home!
I’ve had a couple of questions about what software I’ve used for the analyses in this blog. I’m a big fan of EViews, so much so that I actually bought a license for it – it’s been around a long time, is comparatively user-friendly, and has 99% of the functions most econometricians need.
Unfortunately it’s also priced to kill. Luckily I was still doing my Masters at the time so I qualified for a student discount, which takes 60% off the retail price. If anybody’s interested in a copy, you can contact Statworks in PJ – they’re the local distributors.
So what happens if you need to do some forecasting or econometric work, and don’t want to pay an arm and a leg? Excel just doesn’t cut it, even with some of the advanced plug-ins available – you might be able to do multivariate regressions, but diagnostics will get you stuck. And forget about more advanced estimation techniques such as VAR or ARCH.
The best alternative I’ve found, if you don’t want to deal with scripting or programming, is Gretl. Gretl is open-source, supports lots of platforms, and is fairly feature complete – in some ways it’s more powerful than EViews. It’s definitely not as user friendly (you can’t for instance just paste in a series from an external source), and graphs are very basic, but all the important bits are there and then some. The Windows version is available here.
But how do you use this thing? I’ll cover some of the basics in a series of posts, using real Malaysian data to illustrate. Stay tuned.
Sunday, August 9, 2009
Does Finance Create Wealth?
What’s the role of the financial sector in an economy? I came across this question the other day while vetting something (what I cannot say) and the answer was, believe it or not, “to create and maintain economic wealth”.
Stuff like this drives me bonkers.
There’s this perception, especially in the media, that finance and especially the stock market is a way of creating wealth. Sorry but that simply isn’t true, except in a narrow sense for the go-between – the banker, the insurer, the stock broker etc. The economic textbook answer of course is that the financial sector functions as an intermediary: between investors and companies, between borrowers and lenders, between buyer and seller.
Since the intermediary earns fees for this service, from an economic viewpoint this is viewed as income which can be accumulated as wealth. But the actual flow of funds that the intermediary handles does not of itself constitute wealth creation, even if one side or the other makes a profit off the transaction.
How come? Let’s take a stock market example. Say you buy shares in company A for RM1.00 a share, and later sell it for RM1.50. You’ve made a trading profit of RM0.50, and have obviously increased your wealth.
But does the aggregate wealth of the system rise? No, it doesn’t – you’ve got counterparties in each buy/sell transaction. Somebody received your RM1.00 in return for the “A” company shares, and somebody paid you RM1.50 for those same shares. In short, this is a zero-sum game – no wealth is created, it’s just passed around.
So where does this idea of finance creating wealth come from? Because the perception is that share prices tend to go up over time (at least, if you’ve invested in a good company) – hence, the monetary value of shares held by investors likewise goes up.
So wealth is created right? Not exactly. Shares are nothing more than a claim on the assets and value of the business of a company. If the value of the company goes up, then the underlying value of the claims represented by the shares of the company will also go up, which usually means the share price also goes up.
But here’s the key point – wealth creation has occurred at the company level, not in the stock market. If the underlying value of a company stands pat, then anybody making a profit trading in the shares of that company did so at someone else’s expense, since the value of the claim represented by those shares has not changed.
Corporate finance exercises, M&A exercises - it doesn't make a difference. Changes in share prices due to these exercises occur because the real and/or perceived value of a company has changed. Again, the change in wealth and value has occurred at the company level, and share prices just reflect that.
Zero-sum game. The same thing occurs in banking and insurance, if not quite in so straightforward a manner.
That’s why bubbles are so damaging. Because the difference between reality and perception gets so large, somebody (usually the professionals) makes an obscene profit, and somebody (usually the sucker retail investor) takes massive losses. And the aftermath of bubbles typically sees underlying values fall as well, so everybody takes a further hit.
So does finance create wealth? No, nyet, nada, non, nein!
Wednesday, August 5, 2009
June Trade: Better Than Forecast
Last month's model forecasts suggested that exports would be at more or less the same level as in May, if not lower. So it's a pleasant surprise to see trade growth sustained in June instead (log monthly changes):
Year-on-year growth still sucks, but not quite as bad as before (log annual changes):
Given the interval forecasts, June's results don't invalidate my weak recovery/inventory bounce thesis just yet; though in the case of my seasonal adjustment model it's awfully close - seasonally adjusted exports reached RM44.5b, just a hair over the upper 95% interval forecast of RM44.4b. The fact that half the increase in June over May came from electricals and electronics exports certainly suggests the potential for an inventory adjustment phase rather than a true demand-led recovery in exports. Price movements of major commodities were mixed in June, with rubber and palm oil down, and crude oil and tin up:
...which to me means that leakage from China's recovery is getting through, but is being tempered by prices. I have my doubts as to how sustainable China's recovery actually is, given how much appears to be wasted on "unproductive" activities.
Going forward, given the structure of the simple models I've constructed, the July point and interval forecasts will definitely be higher, which paradoxically means the risk of undershooting the forecast is also higher if my reading of what's going on is right. On the other hand, if the July numbers match or exceed the forecast, then chances are we are seeing a sustained recovery in external demand. That in turn means we have a better than even chance of seeing positive GDP growth in 3Q - one can hope.
Next month's forecasts:
Seasonally Adjusted Model:
Point forecast: RM43,668; Range forecast: RM38,255–RM49,080
Seasonal Effect Model:
Point forecast: RM45,408; Range forecast: RM39,925–RM50,891
Technical Notes:
June trade data from Matrade. Details on how the models were constructed are here.
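For those following the Gretl tutorials: point and interval forecasts of this kind come from the fcast command after estimating a model. A sketch using the simple export model from the tutorials (not the actual models behind the numbers above, and with syntax as per the Gretl manual):

# estimate the export equation on monthly data through June 2009
logs exports imports
ols l_exports const l_imports(-1)

# extend the dataset by one month, then produce a one-step-ahead
# point forecast for July with a 95% confidence interval
dataset addobs 1
fcast 2009:07 2009:07 --static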
Labels: exports, external trade, imports, seasonal adjustment, seasonal effects