Rarity and conservation status: Hands-on experience conducting ecological research

We, the Biodiversity Critical Thinking tutorial group (a.k.a. Team Isla), decided to undertake a mini data synthesis project at the end of our Critical Thinking Tutorials to test the research question: “Are rare species more likely to be declining in abundance than common species?”

This project is a learning experience in how science really works and a chance to test a really cool question that we don’t think has ever been explicitly tested in this way before.

STEP 1: Coming up with our research question

We began by each suggesting a question in ecological or environmental science we were interested in investigating; these included:

  1. Rewilding: where has it been done? Is it realistic?
  2. Are native species better at providing ecosystem services than non-native? Do native species have higher productivity?
  3. Antarctic – is it the world’s largest biodiversity cold spot? Is biodiversity changing faster there and could it be a new hotspot one day?
  4. Home made electricity – what is the potential? Is it cost effective? What are the options? Could we look at an already self-sustaining village (e.g., Findhorn) and figure out the best way to make a difference in Scotland?
  5. How do current protected area maps overlay with future species ranges and climate models?
  6. Are rare species really common and common rare? What does this mean for conservation?

By elimination voting, we decided on idea number 6: Are rare species really common and common rare?

Brainstorm

We had lots of fun creating our brainstorm from which we sprouted our ideas.


This only captures some of the wonderful discussion that was held.

STEP 2: Defining our methods and collecting our data

First we discussed what rare versus common meant and decided that rareness versus commonness relates to some combination of the following:

  1. The geographic range extent
  2. The local abundance
  3. Habitat specificity

We agreed that there were ways we could explore the first two components of rarity/commonality using available data for UK species.

We also discussed how we wanted to link rarity to conservation status, and that population data over time would allow us to calculate whether a species has a declining, stable or increasing population – an indicator of whether a population is threatened.

We started out by downloading the Living Planet Index (LPI, http://livingplanetindex.org/data_portal) data of populations for species all around the world.  We then subsetted out only the species that have population records from the UK.  These 211 populations will be the basis for our data analysis.  Then we decided we could use the Global Biodiversity Information Facility (GBIF, http://www.gbif.org/) to download occurrence data for all of these species and then use those occurrence data to calculate the geographical range extent for these species.  We can also use their population numbers and the number of occurrence records to estimate their local abundance.
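Since this step was all done in R, here is a minimal sketch (not our exact script) of how occurrence records for one species can be pulled from GBIF and turned into a rough range extent; the rgbif and geosphere packages, the example species and the 500-record cap are illustrative choices, not part of the original workflow.

```r
# install.packages(c("rgbif", "geosphere"))
library(rgbif)      # GBIF occurrence downloads
library(geosphere)  # areaPolygon() for areas on the globe

species <- "Lutra lutra"  # illustrative species; in practice we looped over all UK LPI species

occ <- occ_search(scientificName = species, hasCoordinate = TRUE, limit = 500)$data
occ <- occ[!is.na(occ$decimalLongitude) & !is.na(occ$decimalLatitude), ]

# Rough range extent: area of the convex hull around the occurrence points, in km2
hull <- chull(occ$decimalLongitude, occ$decimalLatitude)
hull_coords <- as.matrix(occ[hull, c("decimalLongitude", "decimalLatitude")])
range_km2 <- areaPolygon(hull_coords) / 1e6
range_km2
```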

We encountered some difficulties with downloading data from these data portals including formatting issues with the data, multiple data sets for the same species or for subspecies within the same species, inconsistent taxonomy, and more!  But, with a bit of R code and some patience, we got everything sorted!

At moments we felt a bit like we were in the scientific cloud – but I think we will make it out the other side.

https://www.ted.com/talks/uri_alon_why_truly_innovative_science_demands_a_leap_into_the_unknown?language=en

STEP 3: Testing our hypotheses and writing up our results

With much excitement, two pieces of R code and many, many .csv files of data from the Global Biodiversity Information Facility and the Living Planet Index, we came together for our last Critical Thinking Session to analyse our data and test our hypotheses.

Here are our research question and hypotheses that we clarified in our last session.

Research Question:

Are rare species (those with smaller geographic ranges and lower local abundance) more likely to have declining population trends than common species?

Hypotheses:

Range extent negatively correlates with rate of population change

H1: As geographic range increases populations will have a lower rate of population change.

H2: As geographic range increases populations will have a higher rate of population change.

H0: The rate of population change will not vary with geographic range size.
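As a sketch of how hypotheses like these can be tested once the two datasets are merged (the data below are simulated placeholders, not our results): regress the rate of population change on log10 range extent – a negative coefficient would support H1, a positive coefficient H2, and no detectable relationship the null hypothesis.

```r
# Simulated placeholder data - the real test uses the merged LPI/GBIF data
set.seed(123)
range_km2  <- 10 ^ runif(211, 2, 6)            # made-up range extents for 211 populations
pop_change <- rnorm(211, mean = 0, sd = 0.05)  # made-up rates of population change

summary(lm(pop_change ~ log10(range_km2)))     # the sign of the slope distinguishes H1/H2/H0
```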

Now that the data were in hand, our methods were finalized and our hypotheses clarified, we were ready to begin to “unwrap our data present”, the idea of finding out the exciting result from a previously unanalysed dataset – a term coined on TeamShrub, Isla’s research group.

But wait, there was one problem!  We hadn’t actually merged our LPI slope and GBIF range size data.  This should be a quick fix, we thought, but that is where we were wrong.  We had downloaded the LPI data using the common names for the species and the GBIF data using the Latin names.  All we needed was a key to link the two, but in the process of setting up our key we did something wrong and the two datasets just wouldn’t merge.  Sadly, we learned an important lesson about using a common taxonomy, and we used up the remaining minutes of our session trying to finish the analyses without getting to the final data-present unwrapping – but I guess this is a very realistic experience of how science goes sometimes!
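For anyone curious about the fix, here is a toy version of the key-based merge we were attempting (all species and numbers below are invented for illustration): LPI trends keyed by common name, GBIF range sizes keyed by Latin name, and a key table plus some name cleaning to join them.

```r
# Toy stand-ins for our real files (values are invented)
lpi_slopes  <- data.frame(common_name = c("Eurasian Otter ", "atlantic puffin"),
                          slope       = c(0.02, -0.05))
gbif_ranges <- data.frame(species   = c("Lutra lutra", "Fratercula arctica"),
                          range_km2 = c(210000, 95000))
name_key    <- data.frame(common_name = c("Eurasian otter", "Atlantic Puffin"),
                          species     = c("Lutra lutra", "Fratercula arctica"))

# Standardising case and whitespace in the names is what finally made the join work
clean <- function(x) tolower(trimws(x))
lpi_slopes$common_name <- clean(lpi_slopes$common_name)
name_key$common_name   <- clean(name_key$common_name)

merged <- merge(merge(lpi_slopes, name_key, by = "common_name"),
                gbif_ranges, by = "species")
merged
```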


After the session, over e-mail, we got the merging problem fixed; the data present is now unwrapped and the results are very interesting!  Stay tuned for the write-up of our mini science project to discover what we found out about populations of common versus rare species in the UK…

by Isla and the Biodiversity Critical Thinking Group


A climate for geoengineering? And the “very worst” case for net emissions…

Since the end of last term, COP21 has happened. Biosphere management emerged as a theme in the discussion of carbon emissions in Paris. Did this come under the category of Geoengineering and carbon dioxide removal technologies? What role might Geoengineering eventually play, and what issues arise? Matthias investigated these questions in the wider context of the global commitment that was made. Jakub, meanwhile, considers what would happen to the climate if we simply burned through our entire fossil resource and did nothing in mitigation.

By Matthias: There have been a number of climate negotiations before, so why is this one being celebrated and perceived by many as a great success? Well, it is certainly quite unique and impressive that not a single country refused to sign the agreement, which would have been enough to throw the whole deal overboard. The reason the Paris agreement was signed by every country is that this time national contributions to restrict climate change were not imposed upon countries; instead, each country gets to choose how much it contributes.

So what is actually legally binding?

There will be a legally binding annual $100bn fund from developed economies to help developing and emerging economies diversify their energy mix with renewables. Each country also has to submit a national reduction target, and the regular review of this target will be legally binding too. However, what is heavily criticised by many is that national emission reduction targets will be determined by the nations themselves.

So what is being celebrated is that, for the first time in the history of climate conferences, the world has collectively agreed upon the goal of restricting global warming to a 2°C maximum by 2100. The attempt to impose binding targets on countries was the major reason previous climate negotiations failed. There will certainly be new talks and reviews over the next five years.  A depressing side note: the $100bn will still be less than 8% of worldwide declared military spending this year.

Is Geoengineering probable in the light of the talks?

The deliberate large-scale manipulation of the planet to counter the effects of climate change has long been a taboo subject, but in recent years more voices say that there won’t be a way around it. The concept of Geoengineering has been around since 1965 but was not promoted much by scientists, since many believed it could divert governments’ attention from preventing or mitigating climate change in the first place. I personally think that Geoengineering will certainly not take place within the next 40 years, since people are rightfully worried about the unknowns. However, if the biosphere gets to a critical stage, some will probably resort to geoengineering, and the question will be which country is in charge, since actions to cool the Earth in one region can have serious consequences in other regions. This is why I think atmospheric measures (i.e. aerosols or reflectors) will be more problematic, and hence less likely, than pumping liquefied CO2 back into the earth.

References: Gunther M. 2009. Suck It Up: How capturing carbon from the air can help solve the climate crisis, 51pp;  BBC news coverage of COP21 and details of the deal on the BBC website

 

And with ALL the organic carbon sequestered in the Earth’s crust back in the atmosphere…

By Jakub: The amount of carbon emitted through fossil fuels has an influence on the greenhouse effect. Since the onset of the industrial era, humankind has emitted a significant amount of carbon into the atmosphere, which has increased average global temperature by approximately 0.85°C [1]. This is almost half of the internationally accepted increase of 2°C. Such a ceiling on the increase in average temperature is thought by experts to preserve the integrity of most ecosystems, limit the rise in ocean levels to protect vulnerable coastal and island countries from drowning, and avoid other negative impacts on humankind.

What would happen if all fossil fuels were burnt and a total of 5 trillion tonnes of carbon were emitted to the atmosphere?

According to scientists, such emissions would cause global average temperature to rise by 6–11°C, with warming more pronounced in the Arctic (15–19°C) [2]. Although not immediately, the subsequent melting of the world’s glaciers would cause global oceans to rise by over 50 metres over the next centuries [3], an event which would drown major cities around the world.  Increased temperatures would also make it impossible to cultivate most of the grains common in our diet and would threaten many terrestrial and marine animals. For example, large proportions of coral reefs worldwide, which provide habitat to a great number of marine species, are under greater risk of bleaching due to climate change [4]. Local weather patterns would be amplified, with rainy areas experiencing more rainfall and arid areas suffering severe droughts. Increased atmospheric concentration of carbon would also result in acidification of precipitation and oceans, which would contribute to a decrease in both plant and animal diversity [5,6].  Eventually, burning all fossil fuels would make the majority of the planet uninhabitable by humans [7].

1. http://svs.gsfc.nasa.gov/vis/a000000/a004100/a004135/index.html
2. http://www.bcsea.org/events/how-much-would-five-trillion-tonnes-of-carbon-warm-climate
3. http://advances.sciencemag.org/content/1/8/e1500589
4. Iglesias-Prieto, R., Matta, J. L., Robins, W. A., & Trench, R. K. (1992). Photosynthetic response to elevated temperature in the symbiotic dinoflagellate Symbiodinium microadriaticum in culture. Proceedings of the National Academy of Sciences, 89(21), 10302-10305.
5. Dubinsky, Z., & Stambler, N. (Eds.). (2010). Coral reefs: an ecosystem in transition. Springer Netherlands.
6. Wei, G., McCulloch, M. T., Mortimer, G., Deng, W., & Xie, L. (2009). Evidence for ocean acidification in the Great Barrier Reef of Australia. Geochimica et Cosmochimica Acta, 73(8), 2332-2346.
7. Sherwood, SC, and Huber, M. An adaptability limit to climate change due to heat stress. Proc Natl Acad Sci USA 2010, 107:9552-9555.

Moving ahead

Using satellite sensors to quantify components of the biosphere and their activity is now well established. A news story last year used ground surveys to validate a new estimate of the global tree population, which leads to questions about how changing age distribution and morphology may affect future ecosystem function. Lidar has been much talked about as a way to remotely examine and quantify the architecture of tree canopies, at least. Alex looked into how the technology has advanced in ecology and also how it has been used more generally.

Shining a new light on our 3D world

We visualise the Earth by harnessing the electromagnetic spectrum that we know matter emits, reflects, transmits and absorbs. Remote sensing (RS) technologies bring major advantages over ground-based science: global coverage, single instruments and procedures, and continuous measurements, for example.  Light detection and ranging (Lidar) is an active RS technology. The RS platform emits light energy in short pulses to measure physical properties of the Earth’s surface and atmosphere based on illumination and reflection. The length of time between the lidar pulse and the returning echo is proportional to the distance to the target. This information can provide the building blocks for 3D models of surfaces across the globe at high spatial resolution. The strength of the echo (the amount of returning light) can be related to concentration, for example the concentration of air molecules in a column of atmosphere below. This example can go one step further in its analysis and separate the returning echoes according to the calculated distance from the sensor; atmospheric density at any given point in the column is therefore attainable. This is only the tip of the iceberg when it comes to current Lidar applications, which range from meteorological measures of cloud cover to mapping the geomorphology of entire landscapes.
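As a toy illustration of the ranging principle (the delay below is a made-up value): the range to a target is simply the speed of light multiplied by half the two-way travel time of the pulse.

```r
c_light    <- 299792458   # speed of light, m/s
echo_delay <- 6.67e-6     # seconds between emitting the pulse and detecting the echo (made up)
range_m    <- c_light * echo_delay / 2   # halved because the pulse travels out and back
range_m                   # roughly 1000 m to the target
```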

Novel applications of Lidar that I bet you didn’t know about…

Google’s self-driving car makes use of a Lidar comprising 64 lasers to position itself within the urban environment. Crash prevention is of key importance and relies entirely on locating surrounding 3D structures.


Radiohead produced the first music video (‘House of Cards’) constructed entirely using geometric informatics and Lidar scanning systems to map people and environments in 3D.

Lost city in Honduras found! Even archaeologists use Lidar. There is an ongoing race against time in Honduras to seek out buried ruins now threatened by deforestation. “Lidar is able to map the ground even through dense rain forest, delineating any archaeological features that might be present” (National Geographic, 2015). The discovery of the City of the Monkey God has no doubt fuelled further excitement surrounding Lidar!


Are we in a mass extinction?

This week’s discussion focussed on mass extinctions, specifically asking whether or not we are experiencing a mass extinction right now.

Firstly, it became clear that the term ‘mass extinction’ is not easily defined. Barnosky et al. (2010) define a mass extinction as a loss of >75% of species over a ‘geologically short interval’ (<2 million years). In contrast to Barnosky et al.‘s rigid definition, Sepkoski (1986, p.278) defines a mass extinction merely as a substantial increase in the amount of extinction in more than one geographically widespread higher taxon during a relatively short interval of geologic time. Many other definitions exist; these two merely represent the ends of a broad spectrum.

 


Figure 1 – Adapted from Rohde & Muller (2005), showing the placement of the five previous mass extinctions and their effect on biodiversity in terms of the number of genera lost.

Previous mass extinctions occurred on a time-scale of many millennia. The most dramatic extinction, which marked the Permian-Triassic boundary, destroyed an estimated 90% of life on Earth but took 60,000±48,000 years to do so. 60,000 years is far longer than our records of biodiversity change, so it is difficult to judge what the long-term trends of future biodiversity loss may be. For past extinctions we rely on the fossil record, but fossils are rare and biased towards species which possess hard, fossilisable body parts and which lived in environments where fossilisation was a possibility. Elevated extinction rates seen by some researchers may merely be temporary blips that will resolve quickly, or they may indicate a trend that will continue.

Linking back to our earlier discussions on the theme of how biodiverse the Earth actually is and how much we know about those species, we talked about how measures of biodiversity loss are skewed towards charismatic terrestrial vertebrates.  Understudied taxa such as bacteria or micro-arthropods may be going extinct faster than other taxa. Regnier et al. (2015) demonstrated from a random sample of terrestrial gastropods that we may have already lost 7% of known species, while only 0.04% of species have been officially recorded as extinct.

Barnosky et al. (2010) proposed six potential causes of the 6th mass extinction:

  • Co-option of resources from other species
  • Fragmentation of habitats
  • Species invasions
  • Pathogen spread
  • Direct species extermination
  • Global climate change

We concluded that (as always) a combination of factors was most likely to blame – unsurprisingly. When probed further about which cause had the biggest effect, we were split equally between habitat fragmentation and climate change. Fixing these two problems, however, would require very different approaches.

We discussed the infamous Living Planet Index (LPI) Report and whether it can be used to infer biodiversity’s impending doom (as is the WWF’s wont). The main summary of the report is a graph showing how the index value (an aggregate measure of population change relative to 1970) has steadily declined. However, population change ≠ extinction rate. Additionally, the index is biased towards certain taxa (charismatic vertebrates); only 4.8% of the world’s vertebrate species have been analysed, and those species which have been studied sufficiently to be included are also the ones most likely to be experiencing declines due to their contact with humans.
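For anyone wondering roughly how such an index is built, here is a minimal sketch of a geometric-mean population index in the spirit of the LPI; the real method involves smoothing and weighting steps that are ignored here, and the abundances below are invented.

```r
# Rows are populations, columns are abundances in successive years (invented numbers)
abund <- rbind(pop1 = c(100, 90, 85, 80),
               pop2 = c( 50, 55, 52, 48),
               pop3 = c( 10,  8,  7,  6))

# Year-to-year change for each population, on a log10 scale
log_ratios <- log10(abund[, -1] / abund[, -ncol(abund)])

# Average those rates across populations, then chain them into an index set to 1 in year 1
index <- cumprod(c(1, 10 ^ colMeans(log_ratios)))
round(index, 3)
```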


Figure 2 – From the Living Planet Report 2014, showing the general decline in vertebrate population abundance relative to 1970 levels.

To round up, we discussed whether any of us would be willing to say whether we are or aren’t currently in a mass extinction. The overwhelming decision was that we don’t have enough data and it’s too early to tell.

P.S. A tip for other discussion leaders: learn from me and don’t give your charges too much extra reading to do; keep the reading focused to ensure a good discussion on a few topics rather than briefly touching on lots of points.

By John

The challenges of biodiversity conservation

In our last critical thinking session for this semester we discussed the challenges to biodiversity conservation and the tools that might help us to overcome them. We built upon knowledge gathered from the Conservation Science course and decided to focus on the practical sides of conservation.

Firstly, we explored the role of conservation scientists in the modern world. We had all watched a presentation by Dr. Richard Fuller, of the University of Queensland, in preparation for our discussion. Dr. Fuller talked about how, as scientists, we have a responsibility to make sure our findings have real-world implications – that conservation science informs policies, urban planning and future developments. We brought up the importance of avoiding bias and acknowledged that there is a need for a mediator between academics and policy makers. Nevertheless, we agreed that conservation scientists should collaborate with local people and government officials to offer practical solutions to biodiversity issues. This work, however, involves negotiation, trade-offs and consideration of ecological, social and economic factors. Making decisions in such a multidisciplinary context benefits from a set framework which takes all of the aforementioned factors into account and outlines the scenario which best addresses them.


Image source – University of Queensland

 

One such product, developed at the University of Queensland, is MARXAN. MARXAN is decision-support software which aims to give the best solution to the ‘minimum set problem’: how do we achieve a certain biodiversity target in the most cost-effective manner? It can be used to highlight areas of high biodiversity value for future designation as protected areas, as well as for evaluation of already existing protected area networks. The program uses planning units – a grid of 1 x 1 km squares. The following information is inputted for each planning unit: land use type, species distributions and the economic cost of the land. We can then select a conservation target (e.g. protecting 25% of the distribution of amphibian species in Tasmania), and the result of running the software is a selection of planning units which meet this criterion for the smallest cost.
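To make the ‘minimum set problem’ concrete, here is a toy greedy selection on invented data. MARXAN itself uses simulated annealing and many more options, so this is only a sketch of the underlying idea: meet a representation target for every species at as little cost as possible.

```r
set.seed(1)
n_units <- 20; n_species <- 5
presence <- matrix(rbinom(n_units * n_species, 1, 0.3), nrow = n_units)  # species in each unit
cost     <- runif(n_units, 1, 10)       # cost of protecting each planning unit
targets  <- pmin(2, colSums(presence))  # aim to cover each species in at least 2 units

chosen <- integer(0)
while (any(colSums(presence[chosen, , drop = FALSE]) < targets)) {
  unmet   <- colSums(presence[chosen, , drop = FALSE]) < targets
  # score each unit by how many still-unmet species it would add, per unit of cost
  benefit <- as.vector(presence[, unmet, drop = FALSE] %*% rep(1, sum(unmet))) / cost
  benefit[chosen] <- -Inf               # never pick the same unit twice
  chosen  <- c(chosen, which.max(benefit))
}
chosen              # selected planning units
sum(cost[chosen])   # total cost of the selection
```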

Once we were familiar with how MARXAN works, we discussed the value of this product for the advance of conservation science. Although one can argue that economic cost should not be part of this analysis and biodiversity should be protected regardless of the cost, we all agreed that MARXAN is definitely a step in the right direction. We live in a finite world, where both natural and monetary resources are limited, and using MARXAN helps optimise protected area networks. The only other shortcoming we found was that it uses a proxy measure, the boundary length modifier, to assess connectivity. Connectivity between protected areas is key for maintaining viable populations in the long term. Although the boundary length modifier addresses this issue, we are eager to see if the new version of MARXAN will offer an improved solution.

MARXAN is designed to follow the principles of systematic conservation planning, known as CARE principles.

  • Comprehensive (Connected) – every type of ecosystem is protected and there is connectivity between the protected areas
  • Adequate – the conservation target we have chosen is adequate for maintaining ecosystem functionality and integrity
  • Representative – there is enough of every kind of biodiversity protected, e.g. an insurance policy against fires and catastrophic events
  • Efficient – achieves the above targets for the least amount of money

Discussion of efficiency and the evaluation of protected areas continued with a critical look at a paper titled ‘Replacing underperforming protected areas achieves better conservation outcomes’ by Fuller et al. (2010). Conservation targets tend to revolve around designating more protected areas, and the widely accepted solution to most biodiversity problems is the expansion of protected area networks. Protecting habitat is seen as a surrogate for protecting biodiversity. The paper we read offers an alternative approach and stresses the importance of evaluation. Although having 13% of the Earth’s land surface under some sort of protection is one of the biggest successes of conservation, it is important to also monitor protected areas. Are they actually protecting biodiversity? Are the populations within them stable? As shown by Fuller et al. (2010), protected areas are not always successful, and in that case is it worth maintaining them as such? Degazetting protected areas has to be done with extreme caution, and it is not yet certain for how long we need to monitor them before deciding they are not successful surrogates for biodiversity.


The Tasmanian tree frog (Litoria burrowsae) is endemic to Tasmania and its populations are declining due to a wide-spread disease. Establishing a protected area network that follows the CARE principles can help boost this species’ resilience. Image source – Wild Side Australia

We all agreed on the importance of evaluation and on using realistic targets. We are studying ecology because we love nature, and we often have an intrinsic desire to protect all of it. This is rarely possible in today’s world of global urbanisation and rapid population growth. For example, if all of the populations of amphibians in Tasmania are to be protected, 99% of the state’s land has to be set aside for conservation. This is obviously not a realistic target, and instead, we ought to focus more on a target that allows humans and nature to co-exist.

Assessing the performance of protected areas is especially important as the effects of climate change are predicted to become more pronounced. Species will shift their ranges, which might place their populations outside of the protected area boundaries. We talked about examples from around the world, including African national parks, of misplacing protected areas. This is indeed a case of ‘so close, yet so far’ – Lisa told us about lions roaming right outside of the fences of the park that was put in place to protect them.

In this session of critical thinking, we set out to discuss how we move forward in conservation. We decided that translating scientific knowledge into real life actions is paramount for effective conservation. This process can be further facilitated by software such as MARXAN, which considers ecological, social and economic factors. Conservation science emerged as a ‘crisis discipline’ and there is a long path ahead before the crisis is averted. It is important to evaluate our actions at each step of the way, as a means to maximise benefit for both people and environment.

By Gergana

Headlines and heretics … and what about the methods?

We’ve been thinking about how data, policy and ideas are presented and communicated – among politicians, researchers and reporters, between these groups, and to the world at large. What makes news “news”? How do the “headlines” of journal articles affect our own perceptions? Where to introduce the Methods? Do “beliefs” among scientists taint objectivity? How have events such as “Climategate” affected public confidence in scientific integrity? We have investigated these questions through the topic of climate change and carbon management, with a continuing angle on soils.

Do beliefs have a place in science?

By Christie. The role of the scientist is effectively to find out the truth of a situation. In order to do this, it is arguably a prerequisite that the scientist must start with a neutral stance. For example, in a court case the jury must have no connection to the tried individual, nor a pre-determined opinion of their guilt or innocence; otherwise, the trial would be considered biased. If a scientist has a pre-determined belief in a concept and is providing evidence for it, how much can we trust this evidence? There is a large difference between believing the evidence put forward is correct, and believing in a theory and presenting supporting evidence. In the name of belief, the latter situation can be subject to scientific misconduct. This has occurred over the years in the climate change debate, where information has been manipulated to provide evidence for a desired scenario. Other than creating distrust in the scientific community, this also represents a halt to valuable scientific development, where no real progress is made due to the primacy of belief. This is not to say that belief has no place in science. Contrasting ideas between scientists have stimulated interesting debate and increased research. So belief has a place in science, if it is in the name of its own valuable development.

Reference: Avery, G., 2010, Scientific Misconduct: The Manipulation of Evidence for Political Advocacy in Health Care and Climate Policy, CATO Institute, Briefing Paper no. 117.

Climategate

By Christie. In 2009, the Climatic Research Unit at the University of East Anglia was hacked. The hacker accessed thousands of documents and emails and posted them to multiple online destinations, just weeks before the Copenhagen Climate Change Conference (COP15). This became highly controversial because sceptics argued that the emails exposed climate change as a conspiracy, made possible by scientists altering data to disprove critics. The Climatic Research Unit claimed this was untrue and that the emails had been taken out of context. This was investigated by eight committees, which ultimately reported that there was no evidence of misconduct or conspiracy. However, did Climategate leave behind a legacy? Despite the timing, Climategate did not have a significant presence at COP15. However, since then, in countries such as the US and the UK, Climategate has provided ammunition for politicians who were not supportive of efforts to reduce GHG emissions. It affected science in that many climate scientists were attacked with freedom of information requests, inquiries and, in some cases, lawsuits. The Intergovernmental Panel on Climate Change (IPCC) also came under fire, and was questioned on its capability and neutrality. Despite all of this, the scientific consensus that anthropogenically induced climate change is happening is unchanged, and has remained that way throughout the investigations.

Reference: Leiserowitz, A.A., Maibach, E.W., Roser-Renouf, C., Smith, N. and Dawson, E., 2013, Climategate, Public Opinion, and the Loss of Trust, American Behavioral Scientist, 57(6), pp. 818-837.

“Four per thousand”

In the run-up to the Paris conference (COP21) a small news “story” surfaced which connected to material in our last tutorial. It was hardly a major headline, but it seemed to commit France to a strategy of storing carbon specifically in agricultural soils. We wondered about the origins of this.

By Hannah. It is believed that limiting global warming to a 2°C rise is not achievable without significant reductions in agricultural emissions. An alternative, high-potential method to reduce net emissions is to sequester more carbon in soils, which has led to the French Government’s ‘4 per mille’ initiative, first proposed at the March 2015 conference on Climate Smart Agriculture. The French Agriculture Minister launched the programme, called “4 per 1000”, this week during the climate talks in Paris; its target is to increase carbon in soils by 0.4 per cent per year.

How France plans to achieve this target is not yet clear; however, it is hoped that through the maintenance and restoration of natural bogs and wetlands, reduced conversion of grassland and forest to arable land, and the reconversion of arable land to grassland, the targets will be reached. Further efforts to increase soil carbon will include aiding the return of plant biomass to the soil through the improved use of organic manures and the increased use of cover crops and their ploughing-in.

An alternative interesting global scheme is the Australian Soil Carbon Accreditation Scheme (ASCAS). Australia is one of the few countries to have a national carbon regulatory regime which recognises the importance of soil carbon sequestration. This system allows farmers and land managers to earn carbon credits by storing carbon or reducing greenhouse gas emissions on their land. Stored soil carbon can even be sold by the farmers as offsets on the open market.

Earth on Fire

Elsewhere carbon in soil was on fire and creating a blip in CO2 emissions, discernible  in the global inventory.

By Hannah. Peatlands act as a major global sink for carbon, storing carbon in amounts equal to the size of the current atmospheric carbon pool, despite covering only 2–3% of the Earth’s land surface (Turetsky et al., 2015).  Thus, in the fight against global warming this vast source of carbon becomes increasingly important and has been described, somewhat dramatically, as a ‘carbon bomb’.

The high moisture content of peat naturally protects peat soils from burning; however, as a result of climate change and human activity, extensive areas of peat are beginning to dry as the water table drops. Drying has left peat vulnerable to more frequent and severe burning.

Peat fire in Selangor, Malaysia

While increased combustion of peat can be seen in both tropical and boreal peatlands, Indonesia is currently in the media spotlight. Indonesia is home to 84% of Southeast Asia’s peatlands. In 2015 a strong El Niño climate phenomenon created drought conditions that fuelled extensive fires across the country, with over 127,000 fires detected in 2015. In 2015 the fires released 1.62 billion tonnes of CO2, a value on a par with the total annual emissions of Brazil. The fires alone tripled Indonesia’s entire annual emissions.

The fires are started both naturally, due to lightning strikes or intense heat, and deliberately, to clear the forest for plantations of commodities such as palm oil. Peat fires often start above ground but move below the surface, where they can smoulder for weeks to months, despite rain events or changes in fire weather (Turetsky et al., 2015). The consequences of the Indonesian fires are widespread, from climatic costs to habitat loss and human health issues.

What is in a (scientific) headline?

By Alex. We recently examined an arguably misleading title of a journal article, ‘Fire-Derived Charcoal Causes Loss of Forest Humus’. The article itself, published in the well-respected journal Science, does not fulfil the claim of the title (Wardle et al., 2008). The change in cumulative mass and carbon concentration is compared between filled mesh bags of three compositions: humus only, mixed humus and charcoal (50:50), and charcoal only. The reported results do suggest loss of mass and carbon release from the mesh bags – buried within the soil – and an enhanced effect of charcoal (figures A and B).


However, the authors then jump to the conclusion that these components are lost from the system, rather than merely having passed through the mesh bag as colloidal material. The title also generalises the findings to all “forest humus”; unfortunately, three sites of boreal forest in Sweden cannot reflect the climatological and soil conditions of all global biomes. This shortcoming is acknowledged within the text, begging the question: why use this title at all?

This led us to question both the role of the title and the authors’ motives for misleading readers. There is no question of the scientific reporting itself aiming to pull the wool over readers’ eyes – such articles would simply be tossed aside during the peer-review process. However, clever choices of title have been linked to: increased citation levels through generalisation; catching the attention of media outlets; simplification to inspire the public to read the literature; and fostering prior misconceptions and false claims based on titles alone. Numerous studies are striving to develop the “perfect” title in terms of length and the inclusion of popular buzz phrases like ‘climate change’.

References:

Hartley J. 2005. To attract or to inform: What are titles for? Journal of Technical Writing and Communication 35(2): 203-213.

Farrokh H and Mahboobeh Y. 2010. Are shorter article titles more attractive for citations? Cross-sectional study of 22 scientific journals. Croatian Medical Journal 51(2): 165-170.

Lehmann J and Sohi SP. 2008. Comment on “Fire-derived charcoal causes loss of forest humus”. Science 321(5894): 1295-1295.

Wardle DA, Nilsson M and Zackrisson O. 2008. Fire-derived charcoal causes loss of forest humus. Science 320(5876): 629-629.

Where do methods belong in a journal article?

By Jakub: In most scientific journals, the research methods section usually comes second, after the introduction. By placing the methods section after the introduction, the reader gets to know how the project was executed before they get to the results and discussion. In the renowned scientific journal Nature (and its ‘family’, including e.g. Scientific Reports), editorial policy demands that the methods be placed at the end of the paper, after the conclusion.

This may be a solid decision because the majority of readers (be they students or scientists) are interested in the results of the study and their implications (the sections which carry the primary message of the study) and therefore skip the methods section. It may provide an efficient transition from the introduction of the topic directly to the results, without interrupting the train of thought with methodology, which is usually very technical and not comprehensible to the general public. In addition, methodology is primarily referred to when somebody wants to reproduce the study to verify the results. Another instance of paying special attention to research methods is when one wants to identify flaws in the design that could lead to inaccurate results. In either of these cases the reader is an expert who spends extra time on the paper and therefore will not mind where the methods section is placed.

On the other hand, certain information from the methodology is vital for the reader to understand the results and thus has to be included somewhere in the text. Scientists are therefore forced to fit bits and pieces of methodology into the results section, which at times can be hard to follow – as we agreed in this week’s session.

Inspired by the soil

Saran – we’ve been looking at the question of how soils fit into the global carbon cycle, drawing on a paper by Scharlemann et al. (2014) in Carbon Management, and Mishra et al. (2013) in Environmental Research Letters.

These more general questions came up from our discussion:

  • It is the UN Year of Soils; the UN Food and Agriculture Organisation (FAO) has put out some animations on YouTube. Who are they for, do they engage, are they effective in conveying the most important information? How many people in the world have viewed them?
  • Who led and who contributed to the Carbon Management paper and why? How did this group get together – and were they successful in their aims (is their recommendation clear)?
  • Is the number on which Fig 1 converges likely to be an accurate number? In general, do we research and refine only numbers that have large initial estimates (leading to downward revision)? What about issues that are erroneously thought at first to be small or trivial?
  • How do we get the right balance between an emphasis on stocks and fluxes of things?
  • Extremophiles: microbes that might inhabit Mars, oil wells – or permafrost. Can we really assume that nothing is happening below 4°C?

Matthias – https://youtu.be/invUp0SX49g – is animated in a very professional fashion and certainly grabs, and doesn’t lose, the attention of the viewer. It is informative and summarises the key importance of soils very well, I think. The overall tone of the video is rather pessimistic; however, the hope lies in imposing legal protection of soils, and this message does get delivered effectively. https://youtu.be/TqGKwWo60yE – uses fast-moving headlines to explain the importance of soils (no narration). It visualises very well, I think, how fast soil is degraded and lost and how long it takes for only a few centimetres of soil to develop. Though I think it does not really suggest how the individual could do anything about it.

Hannah – The following figure from Scharlemann et al. (2014) shows estimates of global soil organic carbon (SOC) since 1950. The variation in these estimates reflects discrepancies in sampling techniques and calculations, which appear to be just as prevalent today as they were previously – to such an extent that even authors using the same data have arrived at differing estimates of total global SOC stocks!

When we take an average of these previous studies, global stocks of SOC are estimated at 1500 billion tonnes. Despite a number of studies converging on this value, questions regarding its accuracy may still be posed. While there are multiple reasons for doubting the accuracy of this value, the main one is the soil depth the studies have used. These studies used existing distributional data that usually only go to a depth of 1 m, which means the current estimate is undoubtedly lower than the true global SOC stock. For example, studies have reported that measuring SOC to 3 m depth yielded estimates 1.5 times greater than those to only 1 m depth.

Considering that peat soils containing SOC in the tropics are up to 11 m deep, there is an extremely high potential that a global estimate of 1500 billion tonnes of SOC is a gross underestimate. This highlights the need for another look at our global stocks of SOC.
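A quick back-of-envelope check, using only the numbers quoted above, of how much carbon the 1 m sampling depth could be hiding:

```r
soc_1m <- 1500          # billion tonnes: rough average of published estimates to 1 m depth
soc_3m <- soc_1m * 1.5  # reported scaling when sampling down to 3 m
soc_3m - soc_1m         # ~750 billion tonnes potentially missing from the headline figure
```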


Estimates of global soil organic carbon stocks from the literature through time – Figure 1 in Scharlemann et al. (2014).

Jakub – Stocks and fluxes are equally important to understand in terms of climate change and soil management, because they are intrinsically interconnected. Although stocks are very important, fluxes should be given higher priority because they determine the amount of carbon that is emitted and are thus the main drivers of climate change.

Christie – The bacterium Planococcus halocryophilus is an example of an extremophile, owing to its ability to grow at -15°C in soil, and has been found in permafrost in Canada. So we cannot assume there is no decomposition in permafrost. The microbe also has traits which could sustain life on Mars … on the basis of temperature, at least!

References:

Scharlemann JPW, Tanner EVJ, Hiederer R and Kapos V (2014) Global soil carbon: understanding and managing the largest terrestrial carbon pool. Carbon Management 5: 81-91  see:  http://www.tandfonline.com/doi/pdf/10.4155/cmt.13.77

Mishra U, Jastrow JD, Matamala R, Hugelius G et al. (2013) Empirical estimates to reduce modelling uncertainties of soil organic carbon in permafrost regions: a review of recent progress and remaining challenges. Environmental Research Letters 8: 035020 see: http://iopscience.iop.org/article/10.1088/1748-9326/8/3/035020

Mykytczuk NCS, Wilhelm RC and Whyte LG (2012) Planococcus halocryophilus sp. nov., an extreme sub-zero species from high Arctic permafrost. International Journal of Systematic and Evolutionary Microbiology 62: 1937–1944 see: http://ijs.microbiologyresearch.org/content/journal/ijsem/10.1099/ijs.0.035782-0

Shouldn’t we know by now if biodiversity is increasing or decreasing?

Biodiversity is the variety of species of living organisms in an ecosystem. Biodiversity is thought to boost ecosystem productivity, where each species – no matter its shape or size, or how rare or abundant it is – has a role to play in its ecosystem.

Whether biodiversity is increasing or decreasing depends on the timescale at which you are looking, as well as the spatial scale (e.g. local, regional, global). This makes it difficult to truly assess whether we have more, fewer or the same number of species. We based our discussion on: Dornelas et al. (2014), Vellend et al. (2013), Brooks et al. (2006), Hooper et al. (2012) and McGill et al. (2015).

We analysed the paper by Dornelas et al. (2014), which covered 35,613 species. With our critically thinking minds we decided that, at first read, a figure of over 35,000 species does not seem like that many relative to the 8.7 million eukaryote species there might be on Planet Earth!  However, when put in perspective, we decided that the study does have value. It includes data from more species than most studies to date have managed to compile, and realistically you don’t need every single species to draw valuable conclusions. Overall, we thought that it would have been more precise if Dornelas et al. (2014) had focused only on marine studies rather than trying to include terrestrial species as well, as we weren’t sure how comparable the marine and terrestrial data sets were in these analyses.

A concern of ours was that large geographic regions of the world were absent due to data collection bias. This moved us on to discussing the repercussions of sampling biases. In both the Vellend and the Dornelas papers, data came mostly from temperate, well-developed areas where active ecosystem management has occurred for a number of years, as shown in the following two maps.

The first map is from Vellend et al. (2013) and the second from the Dornelas et al. (2014) paper, showing the areas where spatial gaps in biodiversity data occur. Seeing these gaps made us think about where biodiversity research occurs globally and how certain regions, such as Sub-Saharan Africa, are probably under-represented in the ecology literature. We also pondered how the spatial biases in climate data are similar to those in ecology (third map), and how the same regions of the world are often understudied across a variety of different fields.

Although the results from the under-represented areas seem to be consistent with the patterns found in well-studied parts of the world, we were unconvinced that we definitively know what biodiversity changes at local scales might be across the planet as a whole without data from the regions with the fewest studies, given the vastly different ecosystems found in these locations.

An overarching problem is that even global analyses as holistic as these have a data collection bias in terms of which taxa were sampled. Microorganisms make up the vast majority of species on Earth, yet they are vastly under-represented in the ecological literature because there is generally less interest in studying them (sorry, microbes!). We agreed that we are very appreciative of the folks who go out to study microbial biodiversity in the real world – thank you!

The reduced emphasis on microbial ecology is likely because it is time-consuming research, species identification is very tricky, and many species carry out the same or similar functions within an ecosystem. For example, if you removed a microbe from a system, it is likely that another would fill the niche and there would be no change in overall ecosystem function. On the other hand, if you altered the top of a trophic pyramid by removing an apex predator, the cascade of effects would be more noticeable. We discussed the concept of functional redundancy (which you can read more about here) and how it could be used to allocate conservation resources based on the usefulness of a species within an ecosystem.

To summarise, many scientists have attempted to track biodiversity increases and decreases, and many of them come to different conclusions depending on the scale at which they conduct their study, as shown in the following graph. We agreed that an important thing to take away from this discussion is something highlighted in the Dornelas paper: it is important to think about biodiversity change, and it is not necessarily the increases or decreases that we should care about, but the effect that biotic homogenisation could have on our world.


Hypothesized species richness changes across scales (Myers-Smith et al. unpublished).

Are biodiversity changes the same across different scales or do we see different patterns depending on which scale we look at? (Figure from Isla’s Conservation Science Lecture on Biodiversity Changes)

By Hannah

The 100-word challenge

Saran:

We liked the paper and appreciated the diagrams. I thought it interesting that Hurlbert refers to statistics as possessing an impoverished vocabulary, with the word “error” (for example) being used in reference to many different issues. Only the “pseudoreplication” term has really been adopted. The paper is clearly well cited. But as part of a new discussion that broke out 20 years later (Hurlbert SH. 2004. On misinterpretations of pseudoreplication and related matters: a reply to Oksanen. Oikos 104: 591-597), Hurlbert suspects that not everyone who has cited the paper has read it. (“Pseudo-issue” – funny – and he specifically notes that he did not originally cite Likens et al. 1970 – our Week 4 reading – as a culprit.) So I’m glad that we read it. Did we understand it? Below are five paragraphs – unedited post-tutorial contributions from our group.

There were two aspects to the side challenge that led to these paragraphs. One was to correctly summarise the issue of pseudoreplication in 100 words. The other was to do so in a way that can be readily understood. This is a reasonable request: marketing campaigns are aimed at people with seven full-time years of education (say, aged 11); in general we should aim no higher than those with nine; scientific journal articles might aim a bit higher – but not much, another couple of years. There are various readability scores available, and Word calculates Flesch-Kincaid for you if you tick the right box under “grammar check”. Websites such as www.readability-score.com are useful if the piece is critical; they provide results for other indices as well – though generally fairly similar ones. Does each index work equally for scientific and non-scientific writing? Do they work well in general for scientific writing? The paragraphs are arranged in order of ascending Flesch-Kincaid scores. We were aiming for a score of 9.0; the first one below hits 9.8!
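For the curious, here is a rough sketch of the Flesch-Kincaid grade calculation that Word performs; the syllable counter is a crude vowel-group heuristic, so it will only approximate the scores reported by Word or readability-score.com.

```r
fk_grade <- function(text) {
  sentences <- max(1, length(unlist(strsplit(text, "[.!?]+"))))
  words     <- unlist(strsplit(tolower(text), "[^a-z']+"))
  words     <- words[words != ""]
  # crude syllable count: the number of groups of consecutive vowels in each word
  syllables <- sapply(words, function(w) max(1, length(gregexpr("[aeiouy]+", w)[[1]])))
  0.39 * (length(words) / sentences) + 11.8 * (sum(syllables) / length(words)) - 15.59
}

fk_grade("Pseudoreplication means treating measurements that are not independent
          as if they were independent replicates of a treatment.")
```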

Team members:

  1. Coming to the wrong conclusion in your experiment because you did not do enough replicates, or did not replicate correctly, so you are unable to trust that your result is correct and due to the factor you were testing rather than to outside factors. For example, you are testing 2 types of potatoes to see which grows bigger. You decide to plant 20 of each type (20 replicates). You plant type 1 at the top of a field and type 2 at the bottom; however, these replicates are not spaced out and are not true replicates. You may conclude that type 1 grows bigger; however, this is actually due to drainage within the field – the bottom of the field is waterlogged and type 2 is unable to grow. To correct this, you could randomly assign each of the 40 potatoes a position in the field (see the sketch after this list).
  2. Let’s say you want to find out which of two crops is more suitable for your field because it grows better. So you plant crop A in the western half of the field and crop B in the eastern half. After your experimental season is over, crop A has developed much less than crop B, so you figure that crop B must be the more suitable crop, since it grows faster. This is pseudoreplication, because the fact that one crop grows better than the other may be due to factors you haven’t controlled for: i.e. the western half gets much more intense sun, or has fewer nutrients (worse soil quality, etc.), so you cannot say that the performance of the crop isn’t due to properties of the field. With true replication you would randomly allocate your crops within the field or use a randomised block design. Now all treatments have an equal probability of being affected by unknown sources of variance.
  3. Let’s take an example. You want to investigate whether forests or grasslands have higher soil moisture content. Pseudoreplication happens when you take only one patch of forest and one patch of grassland and take multiple measurements of soil moisture from just these 2 locations. This does not represent the behaviour of forests and grasslands in general, as was first intended, but only compares the properties of those 2 locations. To avoid pseudoreplication one must take measurements from multiple patches of forest and grassland to get an idea of how these two habitats behave generally. In most experimental designs, avoiding pseudoreplication means samples should be independent, i.e. not sharing characteristics that set them apart from other samples.
  4. For the purpose of this explanation we investigate the effectiveness of a de-worming treatment on sheep (worms being parasites, harmful to sheep survival and development). The worm density should be measured with the treatment (a) and without it (b). Pseudoreplication, which is bad experimental practice, would happen here if the repeated treatments (a & b) were applied to an individual sheep only, to the same species of sheep only, or to sheep found in one location only. The measurements of worm density in the sheep gut would all be linked by a shared organism, species-level traits or environmental conditions, no matter how many measurements are recorded. To gain independent measurements of the treatments we need to capture the range of conditions we can see.
  5. If the repetition of your experiment produces multiple copies of the same treatment which share entirely the same variables/manipulation/situation/site, then the factors are not independent, and using these results in your experiment will mean it is pseudoreplicated. E.g. if you are counting the number of rabbits in coniferous forests, doing multiple replicates in one or two sites of a single conifer forest will be pseudoreplication because the effects you find will not be independent. Instead you should do fewer replicates in each of multiple forests.
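Below is the sketch promised in the first example: a hypothetical random allocation of the 40 potato plants to positions in the field, so that field effects such as drainage fall on both varieties rather than on just one.

```r
set.seed(42)
positions <- expand.grid(row = 1:8, column = 1:5)            # 40 planting positions in the field
variety   <- sample(rep(c("type 1", "type 2"), each = 20))   # shuffle the 40 plants at random
layout    <- cbind(positions, variety)

# Both varieties should now appear in the top and bottom halves of the field
table(layout$variety, ifelse(layout$row <= 4, "top half", "bottom half"))
```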

Experiments lasting multiple lifetimes?

In week 4 of our critical thinking tutorials, we tackled a comprehensive paper by Likens et al. (1970) describing the original Hubbard Brook experiments. The study aimed to determine the effect of deforestation and herbicide application on:

  • The quantity of stream water flowing out of the watershed
  • Chemical relationships within the forest ecosystem (e.g. nutrient relationships and eutrophication).

The Hubbard Brook Experimental Forest

What did the study do that has never been done before?

In the 1960s, this was the first small-watershed approach to studying elemental budgets and cycles. Ultimately it also led to the development of the Long Term Ecological Research Network, which now plays a key role in long-term scientific studies in the United States.

We, as a group, were struck by the magnitude of the original study. The fact that the experiments were carried out on an entire watershed ecosystem, with multiple watersheds to apply treatments to, is incredible. Looking into the ongoing study now, the study site has been expanded and encompasses even more individual watersheds, so that multiple replicates of the treatments can be applied. Although this was all impressive, we began to question how representative this study might be for watersheds in other environments with different site characteristics…

How widely should the conclusions of the study be applied?

We noticed the Hubbard Brook watersheds have certain site characteristics that may not be present in other watersheds throughout the world. For example, the area is surrounded by northern hardwood vegetation and the streams are located on watertight bedrock. Different site characteristics may influence the stream characteristics being measured. We acknowledged the difficulty of carrying out global experimentation and the lack of funds, which calls for a reasoned reduction in the number of study sites, perhaps focusing on ecologically important areas. Generally, no site can be representative of an entire region but extrapolation of local patterns to global scales is central to the application of research insights and policy-making.

More points on the study design…

We discussed what we would change if we could carry out the Hubbard Brook experiment again. In general, more samples of stream water and precipitation should be taken in time and space. More replicates of the actual watersheds would also be needed to ensure there was no pseudoreplication (see last week’s blog post!). We then began questioning the reason for the herbicide application and how this would affect the results. We also questioned whether the prevention of regrowth would limit the ecosystem’s capacity to buffer the effects of deforestation.

Buffer Capacity and LTER

How important is the environment’s ability to buffer against disturbances? We stressed the importance of the buffering capacity of ecosystems for the future in light of climate change and ongoing human disturbance.

To monitor changes within ecosystems, such as stream flow and nutrient budgets, we discovered the Long Term Ecological Research Network. This is a network of study sites that are being monitored over several decades. The aim is to extract trends from data sets put together using common protocols and approaches. Originally we thought this was a great idea but questioned the feasibility of conducting such research over the world as a whole.


The US Long-term Ecological Research Network

Long-term trends can be valuable for testing individual effects of various factors (e.g. climate change and habitat destruction) and could be useful for policy making in the future. However, delving into the research network further, we discovered that the majority of funding comes from the National Science Foundation, and the grants are given in a 6-year renewal format. It is more difficult than we initially thought to get long-term funding, with the threat of being cut off after 6 years, which would significantly reduce the value of the long-term time series as a whole.

Overall, we did find the concept of a long-term research network useful and would encourage the idea of having something similar in Europe, especially since the main aims of the American version are:

  • Make data more accessible to the broader scientific community using common data management protocols, cross-site and cross-agency research
  • Participate in network level and science synthesis activities

Perhaps the LTER concept can, if adopted throughout the world, contribute to developing more effective global data sets with common data collection methods and metrics. In the future, this collaboration could reap major rewards for the analysis of long-term ecological data, which could result in more focused science and effective policy-making.

By Lisa