By Chris Lang
Trying to calculate how much avoided deforestation has been achieved through a REDD project is not an easy matter. In fact it’s impossible without knowing what would have happened in the absence of the REDD project.
“Offsets are an imaginary commodity created by deducting what you hope happens from what you guess would have happened,” as Dan Welch brilliantly put it five years ago.
This is known as a counterfactual, or a description of what did not happen, but what might, could or would happen under different conditions. Without this counterfactual version of events, it’s impossible to know how many tons of carbon were not emitted into the atmosphere and it’s therefore impossible to know how many carbon credits can be issued from a given REDD project (or from a national level REDD programme).
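To make that subtraction concrete, here is a minimal sketch in Python, with entirely made-up numbers, of the arithmetic behind issuing credits against a counterfactual baseline. The function name, the baseline figure and the carbon density below are illustrative assumptions, not any particular REDD methodology:

```python
# A minimal sketch of the arithmetic behind REDD credit issuance.
# All numbers are invented purely for illustration.

def credits_issued(baseline_deforestation_ha, observed_deforestation_ha,
                   carbon_density_tc_per_ha):
    """Tonnes of CO2 'not emitted' (i.e. credits) for one accounting period."""
    avoided_ha = baseline_deforestation_ha - observed_deforestation_ha
    avoided_carbon_t = avoided_ha * carbon_density_tc_per_ha
    return avoided_carbon_t * 44.0 / 12.0  # convert tonnes of carbon to tonnes of CO2

# Hypothetical example: the baseline (the guess of what would have happened)
# says 10,000 ha would have been cleared; 6,000 ha actually were.
print(credits_issued(10_000, 6_000, 150))  # roughly 2.2 million tCO2
```

The observed deforestation can be measured; the baseline term is the counterfactual guess, and every tonne of credit depends on it.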
Fortunately, we have experts like Lou Verchot, Director of the Forests and Environment Programme at CIFOR, to explain how REDD is going to work. Here he is talking at Rio+20 in June 2012 (the video is posted below). First, Verchot explains, we need a reference point:
“Reference emission levels are the reference point from which you begin counting how much emissions reductions you may be able to achieve. Reference emissions levels set up a counterfactual of how much emissions levels would have been, would have occurred in the absence of activities to reduce the emissions. So, it’s a little complicated to set up because it is exactly that, it’s the counterfactual, it’s something that did not happen.”
A “little complicated”? That’s some understatement. The REDD that Verchot is discussing is a carbon trading scheme (otherwise why bother going to such extraordinary lengths to measure the carbon not emitted against a counterfactual guess of what would have been emitted?). REDD credits will be sold, allowing emissions from burning fossil fuels to continue somewhere else. The CO2 thus emitted will remain in the atmosphere for around 100 years. That means we need to know what would have happened, in the absence of REDD, for the next 100 years, as well as knowing whether the forest will actually remain standing for the next 100 years.
Verchot continues:
“But you also want to set it up in a transparent way that everybody can agree upon, so it has to be, it’s some part negotiation, some part technical determination of what would be a reasonable deforestation level, what would be a reasonable emissions level if you did not have deforestation or if you had business as usual with no reduction in deforestation.”
Verchot explains that the technical side involves looking at historical deforestation rates and the causes of that deforestation. Obviously, agreeing on what did not happen, but might have happened if conditions were different, will take quite a lot of negotiation.
“The difficulty is really in this element that it’s counterfactual. It’s something that did not happen. So how do you, you know, under normal circumstances without much variation in the economy you can certainly expect the last five years to be a good predictor of the next five years. But nobody expected the financial crisis, for example. Nobody could predict these big changes, these inflexion points. And that’s where things get a bit difficult.
“You know if you go through a financial crisis and the returns to land decrease, you expect a strong reduction in conversion of forest area. If you get a price spike, if food prices go up, you would expect agricultural land to expand. And so predicting some of these flash points, or these points where there are major changes, that cause inflexions in your deforestation rate, is very difficult.
“And that’s where things get a little complicated.”
There he goes again. A “little complicated”? The interviewer asks Verchot how this can be addressed. And this is where it gets really scary. Not only do we not know what might happen in the future, we don’t really know enough about what happened in the past either, because, as Verchot points out, many countries “don’t have good deforestation data”. By now the whole idea of selling carbon credits is looking pretty flaky. If Verchot is fazed, he doesn’t show it:
“We’ve put together a stepwise approach. And the stepwise approach really is data driven. One of the problems is that we are trying to do this in countries that don’t have good deforestation data. They don’t understand how much, they don’t have good data on how much forest they currently have, or what the deforestation rate has been. So we lay out some ways to get started. There are datasets out there that everyone has access to. They have problems, everyone knows what the problems are, but at least they are datasets and they are consistent over time.”
Consistent over time? Wrong, but consistently wrong. But when you’re comparing the data to a guess of what might have happened in order to generate an imaginary commodity, that’s the least of your worries.
“You can start with that and set out your reference emissions level. But then what we did is we laid out pathways to improve that. So how could we go about improving the estimates of the area that’s been deforested? Through satellite imagery, through non-spatially explicit to more spatially explicit types of approaches. How would we go about improving our emissions estimates, because you know it’s not just about the area that’s deforested, it’s about how much carbon is held in the trees on those areas that are deforested. And that varies. A forest on top of a mountain, or a forest that’s at sea level, a tropical rainforest or a dry forest all have different carbon contents. So we set out a pathway of how you go from very basic information to getting more complete information, so that you get better estimates, more accurate estimates and less bias. So you can improve the precision as well as the accuracy.”
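Verchot’s point about carbon content is easy to show with numbers. Below is a purely hypothetical sketch of why stratifying the deforested area by forest type changes the emissions estimate compared with applying one average figure; all densities and areas are invented for the example, not measured values:

```python
# Hypothetical illustration: the same total area deforested gives very
# different emissions depending on which forest types the clearing falls in.
# All figures are invented for the example, not real measurements.

carbon_density_tc_per_ha = {   # tonnes of carbon per hectare (illustrative)
    "lowland rainforest": 180,
    "montane forest": 120,
    "dry forest": 60,
}

deforested_ha = {              # hectares cleared in one year (illustrative)
    "lowland rainforest": 1_000,
    "montane forest": 500,
    "dry forest": 2_000,
}

# Stratified estimate: multiply each stratum's area by its own carbon density.
stratified_tco2 = sum(
    deforested_ha[s] * carbon_density_tc_per_ha[s] * 44.0 / 12.0
    for s in deforested_ha
)

# Naive estimate: apply a single average density to the whole 3,500 ha.
average_density = sum(carbon_density_tc_per_ha.values()) / 3
naive_tco2 = sum(deforested_ha.values()) * average_density * 44.0 / 12.0

print(round(stratified_tco2), round(naive_tco2))  # ~1,320,000 vs ~1,540,000 tCO2
```

The difference of a couple of hundred thousand tonnes comes purely from where the clearing is assumed to happen, before any counterfactual enters the picture.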
There’s another counterfactual to be considered. What would happen if the finance for REDD didn’t appear? Given that the amount of carbon not emitted under a REDD programme is anybody’s guess and that the UN-led carbon market is close to collapse as the Financial Times commented two days ago, things look pretty shaky.
“Right now everything is in the readiness phase, there’s no long term certainty of funding for REDD. That’s one of the things that our research is showing is a major impediment to REDD moving forward. Demonstration projects are being set up but they are not scaling up because they have no certainty of long term funding.
“The problem is spending the money that’s available right now…. Countries are having problems spending the money that’s available right now. But they are having problems because they are not ready to invest in institution building, in capacity building, in some of these long term things that are required, because the long term certainty of REDD, of REDD financing, is just not there.”
Talking about the stepwise approach, Verchot notes that it has been agreed in the UNFCCC negotiations, at least in brackets.
“This is our suggestion, and something that we’ve actually demonstrated can work through our research…. Getting reference emissions levels set is the first step, because now a country can go to another country and say our reference level, we think our emissions levels are this and they are going to be this for the next five years. Let’s talk about what sort of compensation we can get and we can talk about what type of emissions reductions we are ready to commit to. So it sets the benchmark from which the international negotiations can happen. Both on the finance side, but also on the emissions reduction side. It’s the first step.”
Earlier this year, REDD-Monitor interviewed Frances Seymour, then-head of CIFOR. Seymour claimed that CIFOR did not take a position on carbon trading. Yet CIFOR has put forward a suggestion which apparently has been taken up by the UNFCCC for solving a problem that is only relevant to REDD as a carbon trading mechanism. That is taking a position.
I disagree wholeheartedly. RLs are critical outside of a carbon credit market. Even if REDD were financed completely by ODA, donor countries would want to know what kind of return they are getting for their investment. This is true for all development projects; in this sense RLs can be seen as one component of an M&E plan for aid projects aiming at reducing deforestation.
Furthermore, setting an RL or REL is a critical piece to understanding which policies and strategies are most effective in avoiding deforestation, giving us relatively objective empirical data.
I am curious: how do you know that the strategies you are advocating for are effective in reducing deforestation? Don’t you think piloting this approach against a baseline is the best way to prove its efficacy?
I’d like to chime in to support Mr. Perkins here, and I should apologize right off for getting a little social-sciencey in what follows. While the counterfactual part of emissions estimation is guesswork, we use counterfactuals all the time when trying to decide if something is worthwhile. Whether or not REDD is ultimately linked to a market – which, if it ever happens, seems like it’s still a long way away – we’d still like to have some idea what kinds of projects do things like reduce deforestation, improve livelihoods, etc. In order to do that, we need a counterfactual – we need some idea of what might have happened in the absence of the intervention in order to assess whether or not the intervention did anything. In the best scenarios, the counterfactual is generated by a controlled experiment, but a lot of the time we tend to simply assume the counterfactual. Say, for example, that job creation slowed after Bush took office in the US. We can’t really say that Bush caused slow job creation without having some idea of what would have happened otherwise, even though the episode might be discussed as Bush causing slow job growth. Unfortunately, the same thing is true of simply comparing historical versus contemporary deforestation rates. Doing so implicitly assumes the counterfactual that nothing changes, which may be just as inaccurate as a more complicated counterfactual generated by something like Land Change Modeler or GeoMod.
I think the counterfactuals in the case of REDD seem so complicated primarily because they need to be highly detailed. Of course, like you point out, it’s hard – maybe impossible – to tell how good our methods for generating counterfactuals might be, which should encourage caution in thinking about whether or not they should be linked to an offset market, which relies on very accurate counterfactuals. All that said, I think that having some kind of counterfactual is – for better or worse – inescapable. The more important questions, which you and Mr. Perkins both rightly point out, seems to be how to have the best counterfactual and how to know in what situations it is safe to use it – and when it should be avoided because the risks are too high.
@al perkins – Thanks for your comment. I agree with you in part. And I agree with Verchot that we need better data on deforestation. If we look at Brazil, for example, the country has pretty good data on deforestation. The data is transparent and presumably can be checked. We can see that the rate of deforestation has fallen dramatically since 2004. All this is good news, as long as it keeps going down. I agree that comparing current deforestation rates (i.e. with reduced deforestation policies) against historical deforestation rates is a good idea.
The counterfactual side of things is another matter. That’s just guesswork. And subtracting that counterfactual (what would have happened without REDD) scenario from the REDD scenario, in order to generate carbon credits is crazy.
Did the reduction of deforestation in Brazil happen as a result of REDD?
Supporting Mr. Perkins and Mr. Verchot here. RLs are relevant however you finance forest protection. Also, Mr. Gallemore is correct to point out that we use counterfactuals all the time and in almost every part of life.
@Mr Lang: “Did the reduction of deforestation in Brazil happen as a result of REDD?” – No, I don’t believe any serious person has claimed so. It’s been due to the creation of the ARPA protected area network, increased enforcement, different credit policies and of course the global economic downturn. The Brazilian Government would like access to US$1 billion from the Norway-funded Amazon Fund for what it actually did in the past – but it doesn’t want any credits to be traded for it. That is ODA-REDD. The problem is that one can question whether the Norwegian billion will be spent on saving any other hectare of forest (“additionality”) or whether Brazil will just cash in now. This is why Reference Levels and pre-defined “payments on delivery of this goal” are essential.
Why do you continue to earn your living the way you do? Because you feel you would probably lose money if you stopped. But how do you know that after one month of unemployment you would not find a way of making more money, or the same money with a third of the work? That is a counterfactual: you believe the trajectory in which no better jobs are up for grabs.
It is my routine job to create RLs or baselines for REDD projects. I do the satellite image work Mr. Verchot mentions, and I know the Brazilian deforestation data inside out – how it’s done, who does it, how accurate it is. (And it’s more or less accurate, one might say.)
The deforestation data gap is closing rapidly (CLASlite, FORMA, etc.). If you cannot do it yourself, you can contract an analyst for less than US$20,000 and she or he will tell you the historic deforestation of any given area over the last 15 to 20 years using free NASA Landsat imagery. The accuracy is within the margins of error reasonable for any scientific measurement.
Mr. Verchot correctly points out that even knowing the past, you cannot know whether socio-economic currents will remain similar or abrupt changes will occur. Using the last five- or ten-year average is a conservative shot. Currently, under the VCS, REDD projects are obliged to re-establish their baseline every 10 years. I would have no problem with making a 5-year re-evaluation obligatory. Currently a project is free to choose 20, 15 or a minimum of 10 years of observation to derive an average. Whether it is more valid to take a 20-year average to balance out outlier years, or only 5 years to capture recent developments, is a philosophical question with pros and cons on both sides.
From your historic analysis you get, for every year, the hectares of deforestation. Now, to estimate future developments most accurately, do you take (a) the average annual loss of forest in hectares, or (b) a deforestation rate? In case (a), you don’t capture the effect of ever more people using ever less forest. In case (b), do you establish a deforestation rate from a reference year (e.g. a 1% per annum loss of the 2000 forest area – which is in effect solution (a)), or do you calculate a running average deforestation rate (the deforestation in each observation year as a percentage of the previous year’s forest area, averaged over the years of observation)? In the latter case you may get, to take an extreme example, a 5% per annum deforestation rate, which could either (I) be applied to a reference forest area (such as the end of your observation period), taking you to 100% deforestation in 20 years and 150% in 30 years, which becomes unrealistic, or (II) be applied by subtracting 5% of the previous year’s forest area each year, which makes your projected annual forest loss ever smaller and may completely lose the socio-economic factor of more people using an ever smaller forest area. (A short sketch after the next paragraph illustrates the difference in numbers.)
Personally, I prefer (a), because deforestation is closely related to the migration of new actors and to population growth. If immigration and population growth remain stable, you should have more or less the same deforestation every year, independent of the forest area left. That applies mostly to very open forest frontier areas (the Amazon). Where the forest frontier is closed and the forest left to clear is limited, land-price and social-norm constraints might kick in, and a deforestation rate that decreases the predicted area cleared each year might be more appropriate.
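Since the difference between option (a) and the variants of option (b) is easier to see in numbers than in prose, here is a rough Python sketch, using invented figures rather than any project’s actual data, of projecting a baseline forward under a constant annual loss in hectares versus a constant percentage rate applied to the previous year’s forest area:

```python
# Rough sketch of the two projection choices described above, using an
# invented historical record (not real data from any project or country).

# Forest area (ha) at the start of each year of the observation period.
historical_forest_ha = [100_000, 96_000, 92_500, 88_000, 84_500, 80_000]

annual_losses_ha = [a - b for a, b in
                    zip(historical_forest_ha, historical_forest_ha[1:])]

# Method (a): average annual loss in hectares.
avg_loss_ha = sum(annual_losses_ha) / len(annual_losses_ha)

# Method (b), running-average variant: each year's loss as a fraction of the
# previous year's forest area, averaged over the observation period.
avg_rate = sum(loss / area for loss, area in
               zip(annual_losses_ha, historical_forest_ha)) / len(annual_losses_ha)

def project_constant_area(forest_ha, years, loss_ha):
    """Method (a): the same number of hectares is assumed lost every year."""
    trajectory = []
    for _ in range(years):
        forest_ha = max(forest_ha - loss_ha, 0.0)
        trajectory.append(round(forest_ha))
    return trajectory

def project_constant_rate(forest_ha, years, rate):
    """Method (b)(II): a fixed fraction of the remaining forest is lost each year."""
    trajectory = []
    for _ in range(years):
        forest_ha = forest_ha * (1.0 - rate)
        trajectory.append(round(forest_ha))
    return trajectory

# Method (a) eventually reaches zero forest; method (b)(II) never does, and the
# projected annual loss shrinks each year even if the pressure on land does not.
print(project_constant_area(80_000, 30, avg_loss_ha))
print(project_constant_rate(80_000, 30, avg_rate))
```

The contrast mirrors the point above: the constant-area projection eventually clears everything, while the constant-rate projection tails off regardless of what the people on the forest frontier are actually doing.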
Found that over-technical and confusing? Well, this was the simple, conservative case. More problematic are modeled baselines, where you try to factor in not only predicted future deforestation but also predicted socio-economic changes – perhaps tendencies that have already begun but had not yet fully displayed their effects on deforestation within the historic analysis (e.g. new livelihoods, crops, clearing techniques). VM0015 of the VCS allows for such modeled baselines. As for the uncertainties and unpredictabilities of future socio-economics, the limited and incomplete data on possibly relevant factors, and the risk of failing to detect relevant factors or of entering insignificant ones into a model – well, anyone who has run a model (and I have, on the spatial distribution of mountain plants’ habitat under various climate change scenarios) knows that uncertainties and errors simply multiply.
Most noteworthy on this topic was the presentation by Prof. Britaldo Soares-Filho (Federal University of Minas Gerais, Belo Horizonte). Although he does not like to hear it, some people find it fair to call him the godfather of future deforestation modeling. Together with Nepstad, he wrote the fundamental paper on modeling future Amazon deforestation (SimAmazonia I) in 2006.
Well, in 2012 he said: "Well, couple of years ago, we predicted the end of the Amazon. It turned out differently, as we all know" [referring to the drop in Amazon deforestation rates].
A linear historic deforestation average (2001-2005) would have been far less off in predicting the observed 2006-2010 deforestation than the complex model that was published. I have not studied this in depth, but to be conservative I prefer a 5-year average to complex socio-economic modeling.
If interested, mail me and I send you publications and presentations.
[email protected]
If the USA’s Lacey Act and its counterparts elsewhere – particularly the EU Timber Regulation in EU member states – were implemented as robustly as citizens have a right to expect, then substantial reductions in forest degradation and deforestation should be anticipated. Particularly so if citizens demand the enacting of similar legislation prohibiting – and requiring due diligence to prevent – the placement of other products deriving from former forest land on the market if those products’ supply chains are associated with illegality.
Counterfactual assessments which ignore this scenario would assume lack of political will / negligence by those in government in the USA, EU, Australia etc – an assumption which is probably right but must be wrong given the urgency with which anthropogenic greenhouse gas emissions must be slashed.