Last week the Ontario Municipal Property Assessment Corporation (MPAC) released the 2012 version of
their continuing study (following one in 2008) of wind turbines and property values in Ontario, entitled
Impact of Industrial Wind Turbines on Residential Property Assessment In Ontario. To sum it up, they
still find no evidence that wind turbines cause property value declines.
The study consists of a 31-page main section along with 12 appendices. MPAC seems to have their
own language and it isn’t easily penetrated by a layman. I’ve read over it carefully several times and there
are still aspects of it that escape me. The appendices are generally beyond anyone who is not a
professional. On page 4 they state their goals for this version of the study:
Specifically, the study examined the following two statements:
- Determine if residential properties in close proximity to IWTs are assessed equitably in
relation to residential properties located at a greater distance. In this report, this is
referred to as Study 1 – Equity of Residential Assessments in Proximity to Industrial Wind Turbines.
- Determine if sale prices of residential properties are affected by the presence of an IWT in
close proximity. In this report, this is referred to as Study 2 – Effect of Industrial Wind
Turbines on Residential Sale Prices.
Their two main conclusions, on page 5, are:
Following MPAC’s review, it was concluded that 2012 CVAs of properties located within proximity of an
IWT are assessed at their current value and are equitably assessed in relation to homes at greater
distances. No adjustments are required for 2012 CVAs. This finding is consistent with MPAC’s 2008 study.
MPAC’s findings also concluded that there is no statistically significant impact on sale prices of
residential properties in these market areas resulting from proximity to an IWT, when analyzing sale prices.
Actually, there are three parts to this study, with the third contained in Appendix
G. Early in 2013 one
Ben Lansink published a pretty solid study that showed property value
declines of anywhere from 22% to
59% and averaging about 37% on residential properties close (all within 1 km) to IWTs, which I
posted on at the time. Apparently Lansink’s work was solid enough that MPAC felt obliged to attack it.
For me to critique all three parts would make for a very long posting, so I’m going to divide it up.
Obviously the details will follow in my subsequent postings, but for the impatient let me summarize:
Part 1, are MPAC’s evaluations of properties close to IWTs as accurate (equitable, in their words) as
those further away?
This section is only of tangential interest to me, as the central question isn’t MPAC’s accuracy, but
rather the effect of IWTs on prices. It seems that, given MPAC’s explanations, their appraisals are
accurate. Still, there are some items in this part that are of interest. For example, it seems that
MPAC has been playing games to get the appraisals to agree with the market while hiding the effect
of wind turbines. They studied only turbines of 1.5 MW and larger, not the older turbines and the areas in
Ontario where the impact has already been felt.
Part 2, do IWTs have an effect on properties closer to them?
This section is of central interest. Unfortunately there are only 5 pages in Part 2, leaving lots of
details missing. Things like the sales prices within the close-in areas. MPAC’s major tool for doing
mass appraisals (4.7 million in Ontario) is multiple regression analysis and we’ve had lots
of experience with how that can be manipulated to obtain the answer the sponsor wants. Instead of
providing us the prices and letting us judge for ourselves what any effects might be, they opaquely
run those prices through their regressions and voila! claim there’s nothing to see here!
But whoever wrote Part 2 must not have been talking to whoever wrote Part 1. On page 18, well within
part 1, there’s Figure 2. Its purpose there is to show how close the appraisals are to the sales
data (the paired blue and green bars) for the different distances from the IWTs.
Note the blindingly obvious. Prices (and appraisals) within 5 km of IWTs are substantially lower than
those further away. I’ve added the horizontal lines so we can better determine the values, which are
noted to the side. Michael McCann, among others, has done a number of studies on IWTs and prices,
and his overall conclusion is a decline of 25-40%, with
almost 100% in
some cases. Does anyone want
to calculate the decline from 228,000 to 171,000? Perhaps the disparity is due to something as
simple as the spread between rural and urban properties, but don’t you think MPAC would at least
mention something? Nope. Nada.
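For anyone who does want the arithmetic, it takes three lines. A quick sketch, using the two price levels I read off Figure 2 above:

```python
# Percent decline between the far-away and close-in price levels
# read off Figure 2 (roughly 228,000 vs 171,000).
far_price = 228_000
near_price = 171_000

decline = (far_price - near_price) / far_price
print(f"{decline:.1%}")  # → 25.0%
```

A 25% decline, which sits right at the bottom of McCann’s 25-40% range.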
Part 3, what are the problems with Lansink’s study?
Appendix G is more or less readable and provides an excellent example
of what David Michaels’ book, Doubt is Their Product, talks about. MPAC throws up, by my count, 7
objections to Lansink’s methodology; of which exactly zero actually indicate that Lansink’s numbers
are wrong. Sowing confusion seems to be the most logical explanation. As an example, objection #4 of
the 7 is that for some of the pre-IWT prices Lansink used, gasp!, MPAC’s own appraisals. Perhaps
whoever wrote Appendix G didn’t bother reading the conclusions in Part 1.
There are more details, of course, in the following postings.
Critique of Part 1
Part 1 of MPAC’s 2012 study asks whether MPAC has assessed properties close to IWTs as equitably as
properties further away. This part, although of only tangential interest to wind opponents like
me, occupies the central part of the entire study. We think the larger question is: do IWTs reduce
values, not whether MPAC is clever and honest enough to correctly recognize those reductions.
MPAC is in the business of mass assessments, nearly 5 million in Ontario. Given this volume they
have no choice but to use computers and computer-friendly techniques to do their assessments. That means
a significant reliance on multiple regression analysis. They determine what sorts of characteristics
influence the selling prices and then use the computers to find out how much influence each
characteristic has. In their experience, 85% of the selling price can be calculated using 5
characteristics, or variables: location, building area, construction quality, lot size and age of
home adjusted for renovations and additions. Note that distance to a wind turbine is not one of these
characteristics and MPAC seems determined to keep it so. But also note that location could be used in
lieu of distance – more on this later.
MPAC uses the ASR, Assessment-to-Sales Ratio, to determine if their assessments are accurate. It is
simply the assessment divided by selling price, with a ratio of 1.0 being a perfect match. MPAC accepts
ratios between 0.95 and 1.05, and presents what seems to be an endless series of charts demonstrating
this, primarily in the appendices. While obviously MPAC (actually everyone) has an interest in accuracy,
their emphasis on it seems misplaced in a study entitled Impact of Industrial
Wind Turbines on Residential Property Assessment In Ontario, which to me and most residents is
a different question.
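For readers who want the mechanics, the ASR is trivial to compute. A minimal sketch with hypothetical numbers:

```python
def asr(assessment: float, sale_price: float) -> float:
    """Assessment-to-Sales Ratio: 1.0 means the assessment
    exactly matched what the home actually sold for."""
    return assessment / sale_price

def within_mpac_band(ratio: float) -> bool:
    # The 0.95-1.05 band MPAC treats as acceptable, per the study.
    return 0.95 <= ratio <= 1.05

# Hypothetical home: assessed at 240,000, sold for 250,000.
r = asr(assessment=240_000, sale_price=250_000)
print(round(r, 3), within_mpac_band(r))  # 0.96 True
```

Note what the ASR measures: whether the assessment tracks the sale price, not whether the sale price itself has been dragged down by a turbine.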
Just think of the ramifications if MPAC decided to include distance from an IWT in their regressions. I
have little doubt it would make Ontario’s lawyers very happy. It would also put Ontario’s
ruling party in a difficult political spot. And don’t forget that the board of MPAC is appointed by the
Minister of Finance, who is a member of the ruling party’s cabinet.
Upstream I mentioned that MPAC could use the location variables that already exist in their regressions
to finesse their way out of this problem. I point to Wolfe Island as an example of how this might work.
The western half of WI is now home to 86 IWTs, a project that had been in development since roughly
2000. If this half constitutes a “neighborhood” then MPAC could reduce the values in that neighborhood
in a uniform manner and never have to recognize the elephant in the room.
As it happens, I posted on
MPAC’s actions on Wolfe Island about 18 months ago. In the 7 years when the wind project
went from being
developed to operational, the roughly 700 properties on Wolfe received the following numbers and
percentages of assessment reductions:
- 2005/06: 130, 9.3%
- 2006/07: 33, 15.2%
- 2007/08: 12, 28.8%
- 2008/09: 34, 12.4%
- 2009/10: 44, 29.0%
- 2010/11: 22, 30.0%
- 2011/12: 27, 24.0%
That’s a total of 302 reductions, which seems like a rather large percentage of the properties there.
A Wolfe Island couple, the Kenneys, asked for a reduction which they say MPAC was willing to grant,
although MPAC wouldn’t let IWTs be used as the reason. It ended up in court, and a local paper had a
reasonably good account of it. Perhaps MPAC’s reluctance to admit the obvious is that once they do,
they must then include distance in their regressions and doing that (and the legal and political
repercussions) is just too unpleasant. So they limp along, using the location instead.
Their favored overall chain of logic seems to be: since the ratios in neighborhoods close to IWTs aren’t
much different from those further away, and since those ratios indicate their assessments are accurate,
and since MPAC doesn’t include distance to an IWT in their regressions, ergo distance from an IWT isn’t
a factor in reducing values. Part 1 of this study is a necessary part of this chain. So the real
purpose of this part of the study (and the study as a whole) seems to be to publicize MPAC’s skills at
keeping the assessments in line with reality, and at the same time deflect how MPAC is going about
this. MPAC is, after all, in a tight spot. The reality is that home prices take a dive when close to
IWTs. MPAC somehow has to lower the assessments around IWTs to keep the ASRs in line while keeping
quiet about why.
Critique of Part 2
I fear that this part will be a difficult one for most people to follow, not to mention being
lengthy. Feel free to skip it. But I think it is important to document what this Study contains, and
MPAC made no effort to make understanding it easier. I recommend you print out Study 2’s 5 pages (pdf pages 26 to 30) and have them at hand
as you read along.
The purpose of Study 2 is to study the effect of proximity to industrial wind turbines on
residential sale prices. In summary, Study 2 finds that “With the exceptions noted above, no distance
variables entered any regression equations for any of the other market areas.” Say what?
It seems that people who are in the business of estimating real estate prices tend to fall into one
of two camps. First are those who make their living providing services to the people who actually
own the properties, with real estate brokers being the most obvious examples. These people tend to
focus on one property at a time and generally use comps or repeat sales to obtain their estimates.
Second are those who make their living providing services to people who don’t actually own the
property. Academics and mass appraisers (like MPAC) are the most obvious examples. These people tend
to focus on many properties at a time and generally use statistical techniques like multiple
regression analysis to obtain their estimates. The second class tends to think in terms of rejecting
the null hypothesis – you assume there is no difference between two sets (in this case close-in
prices and far-away prices) unless you have “statistical significance”. As a snarky aside, getting
to statistical significance in real estate can be quite a challenge, given the wide variance among
prices, and can be even more difficult when your sponsor/boss doesn’t want you to do so.
So of course MPAC used their main tool, multiple regression analysis.
They created three new variables based on distance from an IWT and entered these into regression
equations to see if the new variables were statistically significant. If they aren’t statistically
significant they don’t “enter” into the regression equations. As for the exceptions (which we’ll get
to shortly), out of 30 possibly significant variables, only 4 were significant and 3 of them were positive.
So right off the bat MPAC is using a tool that doesn’t provide the answers the actual owners of
potentially affected properties really care about. A binary statistical significance indicator does
not provide an answer to the “how much” and “how likely” questions a homeowner is going to have. In
this case, MPAC has skipped through the study so opaquely that I can’t even have much confidence in
my critique. There are just too many omissions, too many unexplained leaps, too many dangling threads.
There are just 5 pages in Study 2. The first of these (page 25 of the
study) lists the three new distance variables and sets their criteria for statistical significance
at either 5% or 10%. For those unfamiliar with that concept, the significance level is a measure of the
odds that two samples which look different are in fact just random draws from the same larger population.
In this case, passing the 5% test means that if the close-in prices and the far-away prices really came
from the same population, a gap as large as the one observed would show up by chance only 5% of the time.
What if the evidence for a drop in your home’s value only clears an 80% bar instead of a 95% one? Not
significant, from MPAC’s perspective.
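MPAC’s actual tests are t-tests on coefficients inside regression equations, but what a significance cutoff does can be sketched with a simple permutation test on made-up prices (in $thousands; none of these numbers are MPAC’s):

```python
from itertools import combinations

# Hypothetical sale prices in $thousands -- NOT MPAC's data.
close_in = [190, 200, 210]   # homes within 1 km of an IWT
far_away = [230, 240, 250]   # homes well beyond

pooled = close_in + far_away
observed_gap = sum(far_away) / 3 - sum(close_in) / 3   # a $40k average gap

# Permutation test: if the 6 prices really came from one population,
# how often would a random 3/3 split show a gap at least this large?
extreme = total = 0
for subset in combinations(pooled, 3):
    gap = abs((sum(pooled) - sum(subset)) / 3 - sum(subset) / 3)
    if gap >= observed_gap:
        extreme += 1
    total += 1

p_value = extreme / total
print(p_value)   # 0.1: passes a 10% cutoff, fails a 5% one
```

Even a $40,000 average gap, with samples this small, only reaches the 10% level – which is exactly the kind of real-dollar difference a binary significance filter can wave away.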
The second page (page 26) is dominated by Table 9. For MPAC’s purposes
Ontario is divided into 130
“market areas”. These areas presumably have some common basis that allows them to be treated as a
unit for their regression equations. Unfortunately I couldn’t find where the areas were or how many
homes were in each. Of the 130 MPAC found 15 that had large enough turbines in them to be of
interest. These 15 are listed in Table 9, along with the numbers of sales within each of the 3
distance variables for both pre-construction and post-construction. MPAC didn’t bother adding them
up either horizontally or in total, but I did. The numbers inside the grid add up to 3136, which
would be the total sales within 5 km in all the areas. But if you add up their numbers along the
bottom you come up with 3143. It turns out that their 142 should be 139 and their 1584 should be
1580. Now this isn’t much of an error, except that any pre-teen with a spreadsheet and 10 minutes
wouldn’t have made it.
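The check itself is the kind of thing a spreadsheet, or ten lines of code, does automatically: the cells should add to the same total whether you sum them directly or sum the claimed column totals. A toy version (the grid here is made up, since the study doesn’t reproduce Table 9’s cells in machine-readable form):

```python
# Cross-footing check of the kind Table 9 fails: cell values should
# add to the same total whether summed by cell or via the bottom row.
# Hypothetical grid with a deliberately wrong claimed column total.
grid = [
    [5, 12, 40],
    [0,  7, 25],
    [2,  9, 30],
]
claimed_column_totals = [7, 31, 95]   # the "3143"-style bottom row

cell_total = sum(sum(row) for row in grid)          # 130
claimed_total = sum(claimed_column_totals)          # 133 -- off by 3
real_columns = [sum(col) for col in zip(*grid)]     # [7, 28, 95]

print(cell_total, claimed_total, real_columns)
```

Ten minutes with something like this would have caught MPAC’s 3136-vs-3143 discrepancy before publication.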
At the bottom of page 26 they introduce pre-construction and post-construction periods, and that
only two of the 15 have enough sales to test both distances and periods. Most of the remaining 13
have “sufficient sales within 1 KM to test the value impact within that distance”. Also that the
“sales period to develop valuation ranges from December 2008 to December 2011”. And that Table 10
provides a summary.
The third page (page 27) is dominated by Table 10. It lists the remaining
10 market areas that
presumably have “sufficient sales within 1 KM to test the value impact within that distance”. Two of
these have enough sales to test both distance and periods while the other 8 have enough sales to
test just the distance. For each of the 10 areas MPAC lists square footage etc and median adjusted
prices. Are these the prices for the entire area or just within 1 km? MPAC doesn’t say. What is the
criterion for “sufficient”? MPAC doesn’t say. Nor does MPAC include what should obviously be
included – both tables. I suspect they are for the entire area, in which case they are useless for
our purposes, at least without the close-in comparison.
Presuming the criterion for inclusion into Table 10 is the 1 km test mentioned on page 26, one has to
wonder how 26RR010 and 31RR010 got into it, as Table 9 shows they had zero sales within 1 km. Snark
alert – maybe the missing 7 sales from Table 9 took place in these areas? And if 1 km isn’t the
criterion, what is? MPAC never says.
At the bottom of page 27 they mention that some sales at the 5 km distance were in urban as opposed
to rural market areas and thus were eliminated. They don’t say how many, nor what their effects on
the regressions might be. They also reiterate their statistical significance levels.
On the fourth page (page 28) they present two more tables, 11 and 12.
Table 11 lists the 8 market
areas that had sufficient sales (within 1 km?) to test the distance variables while Table 12 lists
the 2 market areas that had sufficient sales to test both distance and periods. These tables made
absolutely no sense to me until I noticed Appendix F.
For all 10 areas they entered the 3 distances and ran their regressions. In Appendix F they list all
the “excluded” variables, in this case all the distance-related variables that didn’t get to
statistical significance. They apparently are called “excluded” since, being “insignificant” they
don’t enter into MPAC’s final pricing calculations. If you look at the “sig” column you will not see
any value less than .100, or the 10% significance level MPAC mentioned on pages 25 and 27. I assume
by omission (and that’s all I can do here) that any of the 3 distance variables that are NOT listed
in Appendix F are in fact significant.
On my first pass through Appendix F I came up with 6 omitted, and thus assumed significant,
variables. Two of the omissions were for zero sales, for areas that shouldn’t even be there by the
<1 km criterion. But, maybe the <1 km variable was never even entered on the exclusion
listing in Appendix F, so maybe I had erroneously assumed it was not excluded when in fact it didn’t
exist in the first place. So maybe the criterion for inclusion in Table 10 wasn’t sufficient sales
less than 1 km, but rather sufficient sales less than 5 km out. Just a typo, right? At least Table
11 now is consistent with Tables 9 and 10.
Finally! Out of the 30 tests (10 areas times 3 tests) I count 4 that are significant. Those 4 make
up the “non-DNE” entries in Table 11. MPAC provided absolutely no guidance or explanation about any
of this, apparently writing for a very small audience.
I can only guess that the dollar amounts in Tables 11 and 12 are the effects of being in those areas
upon the prices. So, in the Kingston area (05RR030), if you live within 1 km of an IWT, you can
expect the value of your home to increase by $36,435! Very impressive – 5 digit accuracy, especially
with a sample size of 7.
Finally, thank goodness, we come to the fifth page (page 29). It is the
Summary of Findings and
contains more words than the rest of the Study put together. This section mostly lists the
significant variables and adds some fairly cryptic commentary.
As I read through and dissected this Study I couldn’t escape the sense that MPAC didn’t want to put
much effort into it. Any narrative or explanations or even public-friendly conclusions are absent.
The tables that are included are ok, once you take the time to figure them out, but what about all
the stuff they should have included but didn’t? Things like the median prices in the areas
represented by the 30 variables. Or an Appendix F1 that shows the included variables, allowing us to
see the t-scores etc for ourselves. Etc., etc.
These missing items cause this Study to be terribly opaque. I hope my explanation above is accurate,
but I can’t be sure due to all the missing items. Maybe the Study reaches valid conclusions, but I
sure can’t verify that. Perhaps MPAC thinks we should just trust them to be an honest pursuer of the
truth. Sorry, that no longer flies, if it ever did. You have to wonder, is there some reason other
than laziness or stinginess that this Study seems so empty? In addition to the opacity the Study
includes several cryptic items that MPAC never explains. For example, from the summary, what do
these sentences actually mean?
Upon review of the sales database, it was determined that the IWT variables created for this study
were highly correlated with the neighbourhood locational identifier. This strong correlation
resulted in coefficients that did not make appraisal sense, and thus have been negated for the
purposes of this study.
If you look at the excluded variables in Appendix F you notice that most of them are named “NBxxxx”.
Probably those are neighborhood identifiers that somehow overlay the market areas. MPAC never
mentions how many there are or what the criteria are for forming one. But pretty obviously the areas
around an IWT could easily coincide with their neighborhoods. So what gets negated? Some of the
coefficients? All of them? MPAC provides no further information.
As an aside, I found it interesting to scan over the other excluded variables to see what sorts of
things MPAC puts into their regressions. Many of them make no sense and they seem to vary greatly
from market to market. I can’t help but think of a bunch of regression-heads sitting at their desks
hurriedly making up variables and desperately running regressions in an effort to get the ASRs
closer to one (ASRs are covered in Study 1).
I’ll leave (thankfully, believe me) this Study behind with the final thought that it seems so
slapped together, so opaque, so disjointed that perhaps even MPAC themselves weren’t sure what
significance it holds. Unfortunately, the wind industry won’t care about any of that, and will use
this study to continue harming Ontario residents.
Critique of the Lansink hatchet job
Ben Lansink is a professional real estate appraiser based in Ontario. In February 2013 he published
a study of two areas (Melancthon and Clear Creek, Ontario)
where 12 homes all within 1 km of an IWT
were sold on the open market. He used previous sales and MPAC assessments to establish what the
prices were before the IWTs arrived and then compared that with the open market prices after they
went into operation. The declines were enormous, averaging above 30%. The following (thankfully
clickable) spreadsheet snapshot gives a good summary of his results.
In quite a departure from MPAC’s style, Lansink lists every sale, every price, every time-related area
price increase rate and every source. Lansink establishes an initial price at some time before the IWTs
were installed, applies a local-area inflation rate over the period between the sales, and compares the
“should-have-been” price with what the actual sales price was after the IWTs were installed. In all 12
cases the final price was lower than the initial price, leading to an actual loss on the property. When
the surrounding real estate price increases were factored in, the resulting adjusted losses are even
greater. The compulsive reader might notice that the numbers above vary slightly from Lansink’s. In
order to check his numbers I reran all his calculations in the above chart and there are some rounding
errors – like on the order of <$10. I posted
on Lansink’s study when
it came out, along with a
second posting on a previous version of his study.
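Mechanically, Lansink’s adjustment is easy to reproduce. A sketch with hypothetical numbers (his actual rates and prices are in his spreadsheet, so the figures below are illustrative only):

```python
def should_have_been(initial_price: float, annual_rate: float,
                     years: float) -> float:
    """Price the home 'should' have fetched had it merely tracked
    the local MLS area average over the holding period."""
    return initial_price * (1 + annual_rate) ** years

# Hypothetical example -- not one of Lansink's 12 properties.
initial = 250_000    # sale (or MPAC assessment) before the IWTs
actual = 180_000     # open-market sale after the IWTs
expected = should_have_been(initial, annual_rate=0.03, years=5)

adjusted_loss = (expected - actual) / expected
# expected is roughly 289,800; adjusted loss roughly 38%
print(f"expected {expected:,.0f}, adjusted loss {adjusted_loss:.0%}")
```

Note that the local-area inflation adjustment only makes the losses bigger: the raw dollar loss here is $70,000, but measured against what the home should have been worth, it is nearly $110,000.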
These numbers are pretty easy to understand, and for most actual property owners are a
hard-to-refute indication of what awaits us should we be unfortunate enough to own property
within 1 km of an IWT. It is powerful enough and inconvenient enough that MPAC felt the need to
single it out for a hatchet job, which is contained in the 7 pages of
Appendix G. The first
couple of pages are introductory stuff. Starting in the middle of page 2 they start their
critique with, by my count, 7 issues with Lansink’s methodology. The 7 are:
- Lansink uses the local area MLS price index in calculating the inflation rate. MPAC points
out, correctly I guess, that within the MLS local area there could be neighborhood variances
that could differ from MLS’s area average. MPAC has lots of neighborhoods defined (see
Appendix F for a sampling) and it would be more accurate to use them. While more discrete
data is generally a good thing, I think most people are quite willing to accept the local
area MLS price index as a reasonable proxy. Besides – how would Lansink obtain MPAC’s
neighborhood data? He used the best that he had, and that best is no doubt good enough for
everyone besides MPAC. As you increase the number of neighborhoods you necessarily decrease
the number of homes in each, increasing the chances of distortion by a single transaction.
Issue #5 below will mention this as a problem from the opposite direction. No doubt if
Lansink would have used neighborhoods MPAC would be criticizing him for not using the more
reliable area average. Additionally – how far apart could a neighborhood be from the local
area average? Does MPAC provide any indication that this caused an error in Lansink’s
conclusions? Of course not.
- Lansink used just two points to “develop a trend”. I have no idea what they are talking
about. Lansink is not developing any trends. As with neighborhoods, MPAC has more discrete
timing adjustments than what Lansink used. In theory, more discrete data might be more
accurate. In practice, maybe not, due to outliers. A monthly MLS area average is good enough
for, again, everybody but MPAC. Additionally – how far apart could their timeline be from
the local area average? Does MPAC provide any indication that this caused an error in
Lansink’s conclusions? Of course not.
- Two homes in Clear Creek have their initial and final sales 8 and 15 years apart and there
was likely something changed in the interim, affecting the price. People are always doing
things to change the value of their homes – does MPAC have any indication that something
substantial changed in one of these properties? If not, this is simply idle speculation,
designed to instill confusion. Does MPAC provide any indication that this caused an error in
Lansink’s conclusions? Of course not.
- For the other 5 homes in Clear Creek Lansink used MPAC’s 2008 evaluations as the initial
price, and MPAC is complaining about that. MPAC is apparently unaware of how ironic this
sounds. They just finished, in this very study, bragging about how close their ASRs were to
one. Does MPAC provide any indication that this caused an error in Lansink’s conclusions? Of
course not.
- For the properties in Melancthon Lansink used the buyout prices from CHD (the wind project
developer) as the initial prices. To confirm these prices were at least in the ballpark of
local market prices he obtained a local per-square-foot average price and it compared
favorably with the prices paid per square foot by CHD. Since there were only 4 samples in
this part of his study, even one outlier becomes a possible source of distortion and this is
one of MPAC’s “major concerns”. This seems an odd criticism, coming from someone who relied
upon the data in Table 9, with its fair share of single-digit samples. Does MPAC provide any
indication that this caused an error in Lansink’s conclusions? Of course not.
- MPAC found one house with a basement and since footage in basements is treated differently
from footage above ground, this would have changed the square footage price used by Lansink
in his comparison with the local average. Since there are only 4 houses in this sample, it
would have moved the average up. MPAC spends the bottom of page 2, all of page 3 and part of
page 4 discussing basements and whether they are finished or not. Does MPAC provide any
indication that this caused an error in Lansink’s conclusions? Of course not.
- I’ll quote issue #7 in its entirety so you can fully appreciate it. “One final issue with
the sales used in the Lansink study was that the second sale price was consistently lower
than the first sale price despite the fact the time frame being analyzed was one of
inflation. The absence of variability in the study make them suspect.” Suspect? THESE ARE
PUBLIC RECORDS. There’s nothing suspect about them. These are facts. They won’t change. If
they don’t fit your narrative perhaps your narrative needs to change, eh? Does MPAC provide
any indication that this caused an error in Lansink’s conclusions? Of course not.
These 7 issues are an excellent example of spreading confusion, hoping that some of it will
stick, saying whatever you can come up with to discredit an opponent. When you’re reduced to
spending over a page discussing basements it provides an idea of just how desperate you are.
The second part of MPAC’s critique involves them running their own study of resales to see how it compares
with Lansink’s. They find 2051 re-sales that were part of this same study’s ASR calculations (in Study 1).
They use their more discrete time variables in place of Lansink’s MLS local area averages. They use
regression analysis because “Paired sales methods and re-sale analysis methods are generally limited
to fee appraisal and often too tedious for mass appraisal work.” Their conclusion:
Using 2,051 properties and generally accepted time adjustment techniques, MPAC cannot conclude any
loss in price due to the proximity of an IWT.
In spite of the voluminous tables and examples, MPAC leaves some very basic questions unanswered. Like
where were these 2,051 properties located and how were they selected? There’s no mention of them in the
body of the 2012 study. Over what period were the resales captured? What were the prices of the close-in
re-sales vs the far-away re-sales? Lansink has documented 7 losing resales within 1 km – why does your
summary say zero?
MPAC has this habit of expecting us to be impressed with large amounts of data, without divulging where
it came from and what filters might have been employed. Same with throwing all these numbers into a
computer and expecting us to uncritically accept the output. In short, MPAC expects us to trust them to
be fully honest, fully competent and fully independent. I hate to be the bearer of bad news to the fine
folks at MPAC, but that trust is no longer automatic for increasing segments of Ontario’s population.
Lansink’s numbers are out in the open and are processed in a way that anyone can verify. Your numbers
suddenly appear and rely upon computers with undocumented processes that always support the agendas of
your bosses. Your methods may be satisfactory to some media, some politicians, some courts and all
trough-feeders, but please don’t be surprised that they are not satisfactory to those of us living in
rural Ontario.
End Wayne Gulden Report