A fine-tuned universe without a fine tuner

Physicists are no better than their biologist cousins. The universe has parameters that are just exactly right. If they were even slightly different, nothing would hang together and human life could never exist. So rather than think that the universe may have had something like us in mind, some kind of end project of which we are one part, physicists now reach for the alternative: the multiverse. There, everything is random chance, but with an almost infinite number of universes one of them was bound to have parameters that would allow human life to emerge. Just chance, not purpose.

So that is the story found in the latest Scientific American. In an article titled “New Physics Complications Lend Support to Multiverse Hypothesis” we find the problem set out:

With the discovery of only one particle, the LHC experiments deepened a profound problem in physics that had been brewing for decades. Modern equations seem to capture reality with breathtaking accuracy, correctly predicting the values of many constants of nature and the existence of particles like the Higgs. Yet a few constants — including the mass of the Higgs boson — are exponentially different from what these trusted laws indicate they should be, in ways that would rule out any chance of life, unless the universe is shaped by inexplicable fine-tunings and cancellations.

Yes, inexplicable and just so to the ten-thousandth decimal place, and there are many constants just like that. What to do, what to do? How can this be explained without an outside agency?

Physicists reason that if the universe is unnatural, with extremely unlikely fundamental constants that make life possible, then an enormous number of universes must exist for our improbable case to have been realized. Otherwise, why should we be so lucky? Unnaturalness would give a huge lift to the multiverse hypothesis, which holds that our universe is one bubble in an infinite and inaccessible foam. According to a popular but polarizing framework called string theory, the number of possible types of universes that can bubble up in a multiverse is around 10^500. In a few of them, chance cancellations would produce the strange constants we observe.

That’s it. There must be 10^500 universes, all different, each going its own way, so that one or two might have just exactly the right physical constants for life to exist. That is so much more plausible. And I especially like this:

The energy built into the vacuum of space (known as vacuum energy, dark energy or the cosmological constant) is a baffling trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion times smaller than what is calculated to be its natural, albeit self-destructive, value. No theory exists about what could naturally fix this gargantuan disparity. But it’s clear that the cosmological constant has to be enormously fine-tuned to prevent the universe from rapidly exploding or collapsing to a point. It has to be fine-tuned in order for life to have a chance.

A fine-tuned universe without a fine tuner. Our existence is therefore a probability of one in 10^500, obviously much more likely than the existence of some entity to set it all in place.

Romney’s plan for the presidency

This is almost too disorienting for words. It is the Romney plan for how to move into government once the election was finally won. It is as far as it is possible to be from the chaos of the second Obama administration: a world utterly different, one that maps out a future and is filled with an articulate sense of where to go and how to get there. But the loss will be felt only by those who could not imagine voting for Obama. The rest won’t know the difference; even if they came across this, they would not feel it was a bullet they had only just missed.

Among the recommendations for the Romney administration:

Corporate-style training seminars were planned for appointees and nominees before the inauguration to teach management skills.

A plan to restructure White House operations to suit Romney’s corporate management style, with clear deliverables.

Detailed flow charts delineating how information and decisions were disseminated through the administration to achieve “unity.”

Plans to evaluate Cabinet secretaries’ performance by “systematically assessing the efforts of their departments in contributing to [Romney’s] priorities and objectives,” perhaps by a newly created “deputy chief of staff for Cabinet oversight.”

More than 100 detailed one-page project management sheets were in circulation at R2P headquarters by Election Day, charting the organization’s progress and preparing for the run-up to inauguration. Movements for Romney, his wife Ann, and Vice President-elect Paul Ryan were heavily choreographed for the days following the election, and many campaign staffers were told to prepare to assume roles on the transition immediately following a victory. (All were guaranteed a job on either the transition or the inaugural committee.) A painstakingly prepared seating chart and floor plan was developed for Romney, his aides, and transition staff across three floors of the Mary E. Switzer Building in downtown Washington, ready for the rapid post-election expansion.

But go into the document itself to have a look at the incredible amount of detail in preparation for the transition.

As you start your way into life . . .

A commencement address from my generation to this generation. How different the world is for idealistic youth as they try to figure out how to patch it all back together after the social attitudes they have spent four years assimilating. A sample, but it is worth reading the lot:

Your life has been a nonstop ride of work, study and fun. Now, though, you’re about to walk out of those iron gates and … what? You’re headed into the most challenging labor market of the last 80 years.

Because you’re driven and have been told over and over in speeches like this one to follow your passion, you’re going to write eye-catching job query letters and send them with bulging resumes to the heads of Greenpeace, the Aspen Music Festival, ESPN, the Clinton Global Initiative, “The Colbert Report” and Tesla Motors.

That will take three days. Then you’re going to have months and possibly years of free time ahead of you. Free time that you won’t know how to fill, because you’ve never really had any before.

Grim and not so funny.

Kafka and thought control

This is such a subtle article that it fills me with endless admiration. By Daren Jonescu and titled “Kindergarten and the Kafkaesque”, its point is that when the leftist loons go after five-year-olds for pointing fingers at someone and saying, “bang, bang”, there is a real intent behind it, one that actually ends up shaping society in the intended way. And it is about more than just guns. As he writes:

No one ever mistook a half-eaten Pop Tart for a weapon. And that is precisely why you are forbidden from saying ‘bang, bang’ while wielding a half-eaten Pop Tart. If this still makes no sense to you, that is because you are not crazy. But try, for a moment, to put yourself into the twisted psyche of a progressive authoritarian, and ask yourself this question: What is the message being sent through such rules, and the lesson being taught through their enforcement?

These are all of a piece with the way we enforce speech codes by making certain expressions of our beliefs beyond the pale in acceptable society. Eventually everyone will understand that it is against the rules of “civilised” society to have a positive view about guns and gun culture, but it goes beyond just guns. It is everything the left doesn’t like that they turn into the equivalent of swearing in public. Everybody learns to behave themselves because there is an ever-present danger that they will be hauled before the PC courts of public opinion.

The ultimate goal is not to punish such thoughts; punishment is merely the means. The real goal is to break the young soul to self-censorship and self-accusation regarding all thoughts related to personal efficacy, individual power, independence, and self-defense. A submissive citizen does not ‘cling’ to his weapons. Therefore, future citizens must be taught that such ‘clinging’ is a vice. Submission to the collective is the goal. Seen from that perspective, it is quite logical to try to make children self-conscious about how they eat their Pop Tarts, lest they appear to be ‘threatening’ society. Notice, they are not actually threatening any person; their threat, being imaginary, is abstract. It is a threat to ‘other students’ in the abstract, to the collective. The child is learning to feel guilty if he catches himself in possession of thoughts unacceptable to the state as such; that is, he is learning to submit.

This is, as he notes, a world of insanity but it is designed to shape the future so that these five-year-olds will know they can do the “bang, bang” routine, or its equivalent in other areas of social censorship, with trusted friends but never, but never, when out of the house and in mixed company. It will become as great a faux pas to speak positively of such matters as it is to smoke inside a building. You will be shunned and cast out from society. It will be impossible to have such views and travel in the company of our social and political elites. Which is why he brings in Kafka and the future we are creating:

Kafka’s world is our world. The nightmare logic of infinite bureaucratic authority which drives a man into admitting his own guilt without even understanding what he is accused of is the mechanism of public school indoctrination. And like Kafka’s Josef K., we are all, in the compulsory progressive public school, to learn how to self-accuse, to self-incriminate, to self-condemn. And then, at the end of our submissive life of democratic self-enslavement, socialized medicine will treat us to the ignominy of an ending worthy of Josef K. — “‘Like a dog!’ he said; it was as if the shame of it should outlive him.”

It is the way it happens and it is how we are controlled in the modern world. Some things just can no longer be said without risking one’s entire career and social position. Being pro-gun is now becoming one of those ideas in the way that other forms of expression have been leached from elite society. Fascinating to see this in action but also extremely depressing.

Prelinger archive

The Prelinger Archive seems to be a vast storehouse of old footage from ancient days. It has come to light because The Atlantic has put up a video from 1961 on what they have titled, no doubt ironically, The Wonderful World of Capitalism.

The problem is a Keynesian one. The goods and services can be produced, but how are we going to get people to buy the volume of output that can now be produced? The answer is marketing, which is to be the saviour.

It is also somewhat noteworthy that the portrayal of this vast outpouring of goods and services was put together at the very end of the Eisenhower administration, at the very moment that Kennedy and Johnson would start the unwinding process of this productive miracle.

The right way and the Gillard way

A quite interesting story on education reform in The Australian. The Prime Minister sort of picked up the idea in 2008, when she “returned from a trip to New York City fired up by the vision of then schools chancellor Joel Klein and his radical overhaul of the city’s schools.” See if you can pick up the salient difference between the Klein model and the Prime Minister’s. The bolding is mine:

Klein graded schools every year on a scale from A to F on the improvement recorded in students’ results, and schools that failed to improve were closed down.

In Australia, transparency is used to identify struggling schools and support them with additional resources – financial, or targeted teaching programs.

In New York they rewarded success and penalised failure. Here we are to reward failure and penalise success. For those who can’t spot the difference – the PM for example – there is a very to-the-point accompanying article by Kevin Donnelly.

According to a government information sheet the additional millions can be spent on “more teachers and better resources”, introducing “personalised learning plans”, “more teachers, teacher aides, support staff”, “better resources and equipment, like SMART Boards, computers, iPads and tablets” and “new and better ways of teaching”.

The flaws in such an approach are manifest. The assumption that simply by giving schools more resources and more staff disadvantage will be overcome and standards will improve is incorrect.

As Donnelly concludes:

No matter how much cultural-left groups such as the Australian Education Union want to believe, low SES is not the principal cause of underperformance. Working-class kids, with effort, ability, application and effective teachers and a rigorous curriculum, can do well. Such students are not destined to failure because of postcode.

It doesn’t take a cultural-left view to recognise that the main reason Australia’s standing in international tests has flatlined or gone backwards is because we have fewer high-ability students performing at the top of the table.

If the Prime Minister is serious about getting Australian students to be among the top five performing countries in literacy and numeracy tests by 2025, then one of the most cost-effective and efficient ways is to better support gifted students.

Promoting competition and meritocracy in education represents an alternative to Gillard’s cultural-left, Fabian-inspired approach.

To a leveller, such a thought is beyond comprehension. They know only how to pull down, never how to build up.

Melbourne in 1910

It’s Melbourne, all right, but such a different world. Will it be just as familiar in 2110? That will be for the folks then to decide, but there will be a lot more images for them to look back on.

The only serious surprise in these pictures was that when they flashed up “Federal Parliament” they showed the Victorian Parliament Buildings. I had thought that the first Parliament was in the Exhibition Building but I guess after that they moved to Spring Street. And I also loved the front open seats on the trams. Must have been the perfect place on a summer afternoon, not so good perhaps in winter.

Krugman polemics

The Rogoff/Reinhart v Krugman debate is more left propaganda than a genuine debate over economic theory or statistical measurement. There is a fascinating thread on the Econbrowser website which more than anything else demonstrates that, so far as economics narrowly considered is concerned, this is not an area in which amateurs have anything to add. But as to the polemics of economic policy debate, it is an attempt, as usual, by the left to shut down and close out any discussion of views that are different from theirs. On the comments thread, I will start with the only comment that discussed the political side of this controversy:

It is rather telling to read the comments attacking Rogoff and Reinhart, and Professor Hamilton for defending them. In the Keynesian view, the notion that government spending cuts can be beneficial is so harmful that it must be fought with all necessary means. Proponents must be shown to be bad actors, hacks, liars.

If small cutbacks are beneficial, larger cuts may be proposed, and the next thing you know, the entire Keynesian edifice may be in danger. If people began to ask whether specific governmental expenditures are worth diverting funds from private use (through borrowing or taxation), then you have a problem. I think this fear is what drives the vehemence of the Keynesian crowd’s attacks on R&R and anyone who would defend them.

Now to the technical part. If R and R are wrong, who will ever know? Here is part of the defence, this time from a different commenter who goes by the name Rick Stryker (which might even be his real name):

I understand the problem that many commenters aren’t familiar with technical arguments and so don’t know how to judge. Let me try to explain the weighting issue intuitively.

Let’s forget economics and look at a simple situation. Suppose we have to decide what the legal drinking limit is going to be, i.e., the blood alcohol level before it’s unsafe to drive. We take 6 men and every day give them enough to drink to raise their blood alcohol level to some point, let’s say it’s 1 on some scale. Each day we measure each man’s reaction time. A reaction time greater than 10 is unsafe to drive. We want to know if a reading of 1 means that you are driving drunk.

We are lucky enough to do the experiment on the first man for 100 days but can only get 10 days of data each for the other 5 men.

Here are the 100 measured reaction times of the 1st man.

8.6 8.2 10.0 7.5 9.8 7.5 10.6 8.2 7.7 8.7
9.2 9.2 8.2 10.1 10.1 7.4 9.0 10.3 8.2 9.6
8.6 9.2 9.1 9.3 9.0 7.0 9.5 9.0 7.9 9.9
9.9 8.4 9.4 9.0 7.5 8.3 9.5 8.3 6.8 10.5
8.2 9.3 7.4 8.6 10.4 9.0 10.4 8.5 10.0 8.6
8.1 8.3 9.9 8.8 9.2 9.2 10.0 8.9 6.5 8.7
8.2 9.3 9.7 6.8 8.6 7.5 8.9 12.1 8.9 9.6
9.0 10.3 10.1 7.4 9.7 7.5 9.2 7.3 8.0 9.1
8.6 8.8 7.7 8.0 8.6 10.2 8.5 9.2 9.9 8.3
9.5 11.8 8.5 9.2 7.8 6.8 8.9 10.1 8.8 9.0

You can see on many days he’s too drunk to drive but not on all or even the majority of days.

Here are the reaction times of the other 5 men.

Day   Man2  Man3  Man4  Man5  Man6
  1   10.4  11.5  10.1  11.3  12.5
  2    9.0  11.3   8.3  12.9  13.2
  3   11.4  11.6  10.0  13.2  14.0
  4   11.2  12.8   9.3  11.5  12.1
  5    9.5  11.1   8.7   9.9  12.8
  6   11.4  10.5   8.9  13.1  12.5
  7   11.4  11.7  11.4  12.6  13.1
  8   11.2  12.5  10.9  11.5  13.2
  9   10.8  13.2   9.2  11.5  12.5
 10   12.2  12.5  10.0  11.4  13.9

Now how do we summarize our findings? If we assume that each man’s capacity to hold his liquor is the same as every other and that the only variations are in what they ate that day, etc., you would just take all 150 data points and average them. If you did that, you’d get a reaction time of 9.7. Thus, you’d conclude a blood alcohol level of 1 is OK.

However, what if you looked at the individual averages of reaction times? Here’s what you’d get for each man.

Man    1     2     3     4     5     6
Mean  8.9  10.8  11.9   9.7  11.9  13.0

Here it becomes obvious from the individual averages that the first guy is different from everyone else–he’s much better at holding his liquor. In fact, everyone is different as we might expect but the majority are drunk on average. So, averaging the first man’s 100 data points in with the 50 of the other 5 men will exaggerate the first guy’s influence and make it look like they can all hold their liquor.

It would be better just to average the averages, in which case you’d get 11 and you’d conclude that a blood alcohol level of 1 is unsafe to drive. That summarizes what’s actually going on better.

HAP [the critics of R&R] did the first estimation and assumed that all the men were the same. R&R did the second method and assumed that the men were in fact different. You can see that the second method is more justifiable if you have any reason to believe that the men are different. Since R&R are talking about growth rates of countries, we certainly have reason to believe they are different.

Moreover, the assumption that the averages are different is the standard starting assumption when analyzing data that is both cross sectional (different men) and time dependent (different days).

Strangely enough, HAP and Krugman accused R&R of doing something non-standard and an “error” by using the second method. I hope it’s clear intuitively how this is wrong from this example. In fact, what HAP and Krugman are proposing is non-standard.

What I asked 2slugbaits [the commenter to whom this comment is addressed] to do would have established that the way we did the average over the 150 data points would have fallen out of the simpler model I gave him if he had done the math. That’s HAP. And the way we did the average of average estimates would have fallen out of the fixed effects model, if he’d done the math. That’s R&R.

Hope this helps.
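For anyone who wants to check the arithmetic, the two averaging schemes in the quoted example can be reproduced in a few lines of Python. This is my own sketch, not anything the commenters actually ran; the numbers are transcribed from the tables above.

```python
# Reaction times from the quoted example.
# Man 1: 100 observations; men 2-6: 10 observations each.
man1 = [
    8.6, 8.2, 10.0, 7.5, 9.8, 7.5, 10.6, 8.2, 7.7, 8.7,
    9.2, 9.2, 8.2, 10.1, 10.1, 7.4, 9.0, 10.3, 8.2, 9.6,
    8.6, 9.2, 9.1, 9.3, 9.0, 7.0, 9.5, 9.0, 7.9, 9.9,
    9.9, 8.4, 9.4, 9.0, 7.5, 8.3, 9.5, 8.3, 6.8, 10.5,
    8.2, 9.3, 7.4, 8.6, 10.4, 9.0, 10.4, 8.5, 10.0, 8.6,
    8.1, 8.3, 9.9, 8.8, 9.2, 9.2, 10.0, 8.9, 6.5, 8.7,
    8.2, 9.3, 9.7, 6.8, 8.6, 7.5, 8.9, 12.1, 8.9, 9.6,
    9.0, 10.3, 10.1, 7.4, 9.7, 7.5, 9.2, 7.3, 8.0, 9.1,
    8.6, 8.8, 7.7, 8.0, 8.6, 10.2, 8.5, 9.2, 9.9, 8.3,
    9.5, 11.8, 8.5, 9.2, 7.8, 6.8, 8.9, 10.1, 8.8, 9.0,
]
others = [
    [10.4, 9.0, 11.4, 11.2, 9.5, 11.4, 11.4, 11.2, 10.8, 12.2],    # man 2
    [11.5, 11.3, 11.6, 12.8, 11.1, 10.5, 11.7, 12.5, 13.2, 12.5],  # man 3
    [10.1, 8.3, 10.0, 9.3, 8.7, 8.9, 11.4, 10.9, 9.2, 10.0],       # man 4
    [11.3, 12.9, 13.2, 11.5, 9.9, 13.1, 12.6, 11.5, 11.5, 11.4],   # man 5
    [12.5, 13.2, 14.0, 12.1, 12.8, 12.5, 13.1, 13.2, 12.5, 13.9],  # man 6
]
men = [man1] + others

# HAP-style: pool all 150 observations and take one grand mean.
pooled = sum(sum(m) for m in men) / sum(len(m) for m in men)

# R&R-style: average each man first, then average the six averages.
means = [sum(m) / len(m) for m in men]
avg_of_avgs = sum(means) / len(means)

print(round(pooled, 1))       # 9.7  -> "safe to drive"
print(round(avg_of_avgs, 1))  # 11.0 -> "unsafe to drive"
```

The gap between 9.7 and 11.0 is exactly the weighting issue in dispute: pooling lets the first man's 100 observations dominate the other five men's 50.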

I don’t know if it helped anyone else, but it did help me. It didn’t help 2slugbaits, who replied:

Rick Stryker You have completely wasted your time. First, what you described is not what normally passes for a fixed effects panel model. For starters, you either have to establish a separate dummy for each panel (which eats up degrees of freedom) or you have to subtract the global mean from each observation before regressing…which eliminates any time invariant variables and is one of the reasons why random effects models are preferred. You didn’t do either one, so yours is not a fixed effects model. So what you seem to think is a fixed effects model is not. Second, neither HAP nor R&R ran a fixed effects panel model. R&R just lumped things into four buckets, took a simple average of each country’s observations within each bucket, and then took an average of the country averages. That’s it. That’s all they did. HAP skipped the second step and just took an average of all observations within each bucket. That’s why I said that if you wanted to replicate what HAP and R&R did in an overly complicated way by treating it as a regression, then you should just regress the observations against a constant. Which is exactly what you did:

HAP make the assumption that a(i) = a, an unknown constant, and estimate

Y(i,t) = a + e(i,t)

where “a” is a constant. But there is no need for a “t” subscript unless that is supposed to represent one of four buckets…in which case the natural choice would be a “b”.

But why would anyone in his right mind do that? Why not just say “take the average”? Third, you have obviously misinterpreted what JDH was talking about when he said that R&R took a panel approach. He clearly did not mean that literally…and if he did then he should have his license revoked. JDH meant that the R&R approach captured the intuition of a fixed effects panel model in that it tried to pick up each country’s unique features.

And really…we all know how to derive a regression using matrix algebra.

BTW, plenty of new number crunching on the R&R data came out today…and all of it crushed the core of the R&R argument. See especially Miles Kimball’s work. And he was originally very sympathetic to R&R’s position but has reluctantly concluded that their work is deeply flawed and worthless.

So Rick Stryker went back again:

I’m certainly wasting my time trying to explain this to you. You are back to your semantics. What we call the models doesn’t matter. I stated precisely what the models were and asserted that estimation of one would lead to R&R and a special case of that same model would lead to HAP. I challenged you to derive the estimators and confirm or deny my claim. I could see that you didn’t seem to understand and wanted you to demonstrate some comprehension of these issues. I was very clear in what I asked you to do. You couldn’t do it. I gave you over 2 days. Now, I’ve shown you exactly how to do it and you still don’t understand. You obviously know nothing about the issues you comment about, not that that stops you or any of Krugman’s other defenders.

The point of all this was for you and others to see that when HAP and Krugman claimed that R&R did something “odd” and an “error” they were just flat out wrong. But you will not see.

This is why I keep talking about Krugman zombies. The level of illogic and irrationality is breathtaking.

To which the following reply was returned:

This whole sorry saga of R&R is reminiscent of a similar issue with Martin Feldstein back in 1974 in which he claimed to show that Social Security reduced private savings. Like R&R, he used these results politically to push his pet cause, in his case a campaign against Social Security. Once again, years later, two other economists, after a long struggle to get the data, found a coding error in the computer program which when corrected, caused the claimed results to disappear.

Posted by: Joseph at May 30, 2013 08:57 PM

Rick Stryker I stated precisely what the models were and asserted that estimation of one would lead to R&R and a special case of that same model would lead to HAP.

So what’s your point? What I said was that the way you were approaching this is flat out stupid and convoluted. It is certainly possible to take data into something like EViews, run it through a pooled cross-sectional fixed effects model and get an answer that exactly matches the way R&R and HAP did things. But you will get exactly the same answer by taking an average of each cross-sectional unit and then taking an average of all cross-sectional units, which is how R&R actually did their analysis. Now if you want to call the former exercise a fixed effects approach, then be my guest, but it’s a mighty odd one. When people talk about fixed effects models they usually have in mind a model that has slope coefficients as well as just a constant and fixed effects deviations. No one estimates two-dimensional pooled cross-sectional data as a special case of a fixed effects model. JDH was not saying that R&R actually did anything as stupid as run the numbers through a fixed effects model. JDH’s point was that their approach tried to capture some of the intuitions of a fixed effects approach, but they did so in a more straightforward way; i.e., just simple averaging in Excel. Doing things your way makes about as much sense as wanting to go from New York to Chicago by heading east. It’s possible to do that, but not very bright. Same with your crazy example of finding a simple mean by regressing against a constant. Yes, you could also call a simple mean a special case of a linear regression, but no normal person would do that…except I will note that you in fact did just that. Go figure.

With your latest tangent I take it that you have given up trying to defend R&R’s analysis. Both of their key points have been fatally undercut. The 90% “threshold” turns out to be no threshold at all. And the causality issue has also collapsed. Not only do high debt/GDP ratios fail to predict lower future growth, but weak exogeneity tests also failed to show a causal relationship.

And as the final posting from Rick Stryker, to which nothing more has been added since, we have this:

I know this is a waste of time to continue to discuss this with you, but for the benefit of whoever is not bored silly with this and wants to learn something, I’ll try again.

The question I want to answer is, “Is Krugman right that R&R used an odd estimation technique?”

In order to answer that, we need to understand what the underlying assumptions are in each estimation method. So we need to write down the conceptual models that are equivalent to the estimation techniques. I’m not saying that R&R and HAP literally ran these conceptual models using statistical software, but rather these models are equivalent to what they did. The advantage of writing the models down is that we can see the underlying assumptions clearly.

I asserted that the R&R method is equivalent to estimating the model

Y(i,t) = a(i) + e(i,t) (1)

and averaging the estimated a(i). I also asserted that the HAP method is equivalent to estimating the model

Y(i,t) = a + e(i,t) (2)

If we can agree on that, then we can immediately see that HAP is a special case of R&R in which all means are assumed to be equal. We can also see that if anyone is making an odd assumption in cross sectional data, it’s HAP not R&R. We need to resolve this question because Krugman has asserted yet again in his latest post the unsubstantiated claim that R&R used an odd estimator.

You responded to this argument with a series of points that were irrelevant. For example, the fact that R&R and HAP didn’t literally run these models is irrelevant to the argument.

To keep us on track, I narrowed the point to just the question of whether the models I wrote down are equivalent to the estimators as I asserted. I asked you to derive the estimators. That way, it’s clear whether I’m right or not. If you derive the estimators and show that they aren’t equivalent to R&R and HAP, then my argument fails. But if you derive them and get HAP and R&R, then you will have demonstrated to yourself that a key assertion in my argument is correct.

But despite my request, you did not derive the estimators. Instead, you responded again with the irrelevant point that R&R and HAP didn’t actually run these estimators. At this juncture, I realized that you really don’t understand the point at all and can’t derive these simple estimators. I was frankly annoyed that I was wasting my time with you. I was also quite irritated that you are attempting to defend Krugman when you don’t understand these issues at all.

I gave you a day before I said anything. I thought that you might try to look up the solution in an econometrics book. After 2 days, I did the derivation for you.

Amazingly enough, despite the fact that I laid out the derivations for you, you are still fundamentally confused. That’s why you need a conceptual model–to avoid confusion. For example, in your penultimate comment you said

“The R&R approach is also wasteful of information because it effectively throws away the country specific variance. That’s a bad feature of any model. The HAP model at least doesn’t throw away information.”

If you look at the models I wrote down and understand the derivations, then you can see that this statement is wrong. Look at the random effects model I wrote down:

Y(i,t) = a(i) + e(i,t)

where now the a(i) are iid random variables with mean a and variance v. Now, let v, the country specific variance, go to zero, i.e., throw away the country specific variance. What do you get? Not R&R as you claimed, but HAP!!

I think the argument I have laid out is exactly what JDH was saying. It must be frustrating for him too to watch this. He can blame me for not being clear enough in explicating it but his original point on fixed vs. random effects is absolutely right.

Also, I noticed that Krugman has backed away from one of the elements of his and HAP’s smear in his latest post, and is now saying that R&R’s excluded data was not intentional and perhaps unavoidable. But he still is claiming that the R&R estimator was “odd.” I wonder if he will back away from that assertion too? He should back away from both completely but that’s not enough. He should apologize.
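Stryker’s central assertion, that estimating model (1) by least squares recovers each unit’s own mean while model (2) recovers the grand mean, can be demonstrated directly. The following is my own sketch with made-up numbers, nothing to do with the actual R&R dataset, but it makes the equivalence concrete for an unbalanced panel like theirs:

```python
import numpy as np

# Toy unbalanced panel: unit A has 4 observations, unit B has 2.
y = np.array([8.0, 9.0, 10.0, 9.0,   # unit A, mean 9
              12.0, 14.0])           # unit B, mean 13
unit = np.array([0, 0, 0, 0, 1, 1])  # unit index for each observation

# Model (2), HAP: Y(i,t) = a + e(i,t).
# OLS on a constant alone gives the pooled (grand) mean.
X_const = np.ones((len(y), 1))
a_hat, *_ = np.linalg.lstsq(X_const, y, rcond=None)
print(a_hat[0])        # 10.333... = grand mean of all 6 points

# Model (1), R&R: Y(i,t) = a(i) + e(i,t).
# OLS on one indicator column per unit gives each unit's own mean.
X_dummies = np.eye(2)[unit]
a_i_hat, *_ = np.linalg.lstsq(X_dummies, y, rcond=None)
print(a_i_hat)         # [ 9. 13.] = each unit's own mean
print(a_i_hat.mean())  # 11.0      = average of the unit averages
```

Whichever summary is right then turns on whether the units really do differ from one another, which is Stryker’s substantive point about country growth rates.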

Who’s right on the economics and econometrics? Who can say. But on the political side of the debate it is a hands-down win for Krugman. And if the American economy does start to tick up, it won’t be because of some stimulus but because of the sequester, which is starting to bring down the rate of growth in public spending.

Krugman in excess supply

Here in today’s AFR we find that Paul Krugman rant I referred to the other day. Such a stupid, inane roll-out of idiocies. I once thought that Keynesian economic theory would soon be rooted out of our textbooks, but not so soon after all. Between the publishing industry, with its backlog of a thousand macro texts spouting Y=C+I+G, and the apparent impossibility even for economists of working out why spending without creating net value is bad for an economy, we will just drive ourselves deeper into the bog. But whatever one might think of Krugman’s useless and damaging economics, he certainly does get around. Whether anyone else can read his stuff besides me is quite a question. I like it because it gives me a perverse pleasure to see just how ridiculously wrong what he writes is, but how does anyone else get through it? And are they any wiser at the end, as in, do they feel enlightened in any way?

Here are two examples of where large cuts to public spending and the deficit were immediately followed by a strong and prolonged upturn.

Case Study I: The end of World War II.

Case Study II: The Howard-Costello budgets in 1996 and 1997.

So this is how to understand the past four years and the problem with cuts to spending. We are in Adelaide and want to end up in Melbourne, so we drive west for 1000 miles. When it finally dawns on everyone that we have been going in the wrong direction, the problem is that by then we are 1000 miles further away than when we started out.

After a stimulus that began in a recession, the first problem is winding back the wasteful stimulus expenditures. That only takes you back to where you already were when the stimulus began. Then, if you go on from there, you might actually make some progress.

But if you think that the solution to a problem caused by wasteful public spending is more wasteful public spending, then I leave you to your gurus and the nonsense economics you can pick up for a mere $3.30 from any newsagent.