Learning from the past: why do we go wrong?

A few weeks ago, I ran a discussion group on the primary cause of the Pleistocene megafaunal extinction event. Archaeological evidence suggests that up to 85% of large fauna weighing over 44 kg went extinct on several continents and many islands towards the end of the epoch. Given the timing of these events, two major competing hypotheses still rage on in the literature today, almost with ideological fervour. The first is that the extinction of these fauna coincided with the retreat of the Last Glacial Maximum (LGM), and that many of these larger, already food-limited species were simply unable to adapt to the warming climate. The second hypothesis places us at the centre of the scene: the bloody culprit, Homo sapiens and our predecessors. Proponents of this “overkill hypothesis” argue that the last known fossils of the extinct genera and the first known evidence of human arrival coincide in the fossil record in each of these locations too often for it to be chance. One reason the argument has become controversial is that some people have begun to question whether we have a moral obligation to pay ecological reparations of a kind. If humans were indeed the primary cause of such a large extinction event, do we feel an obligation to make amends?

So we mused and puzzled over these lines of evidence, trying to determine within the short space of an hour whether one was more convincing than the other. Shortly into the discussion, however, we became quite unstuck on one point. Instead of weighing up the data as if they were equal, we started wondering: how can anyone be sure of causation using archaeological evidence at all? It is quite unlike the data sets we are used to. Say you are digging in the Yukon Territory in Canada and you come across several sets of equid fossils spanning several thousand years. Clearly these horses are getting smaller over time, you observe; clearly humans weren’t in this region at the time, you think; clearly this is indicative of the environmental pressure of the glacial retreat, you surmise; clearly it supports the climate change hypothesis, you conclude. Excellent. A Nature paper awaits! But wait. In your haste, you have probably fallen into quite a few of the major pitfalls of extrapolating from fossil data.

Firstly, you are likely suffering the plague of all palaeo-scientists: taphonomic bias. Taphonomic bias arises because we are only privy to the kinds of evidence that can inherently survive through geological time. Soft materials, smaller fauna and events that took place in raging rivers are less likely to be preserved than their counterparts, leading to the under- or over-representation of certain taxa or events. Whole species, whole ecosystems could have existed during this time that we may never even know are missing. A potentially very significant source of taphonomic bias we came across during our discussion was the timing of the first hominid arrival in North America. The current general consensus is that this fell somewhere between 13,000 and 20,000 years ago, although some evidence suggests it could be as far back as 50,000 years ago. The difference between the upper and lower ends of this estimate has significant consequences for how much blame we can attribute to humans for the extinction event. Humans who arrived only 13,000 years ago were indeed quite pushed for time when it came to condemning equid species to extinction. Of course, the presence of a rather large ice sheet over North America until about 20,000 years ago could also have limited our ability to detect any earlier human presence.

Another pitfall when drawing conclusions from this type of data is that it is impossible to conduct experiments or make direct observations: there are no factors to manipulate and no events to watch. Determining causation becomes, at best, an educated guess, and using contemporary situations as a stop-gap can be no more illuminating. In his paper, Guthrie (2003) treats past equids as obligate grazers on the assumption that modern-day horses can serve as a proxy for their predecessors’ feeding habits. However, using modern-day analogues to extrapolate into the past can be very misleading: contemporary equids are adapted to a different climate, different predators and, by now, a range of anthropogenic factors.

Figure 3 (from Guthrie, 2003) – Change in equid body size, measured from metacarpals, over time. The data show an Alaskan equid extinction event at roughly 12.5 ka (about 12,500 years ago). The significance of this decline is that it is supposed to be indicative of environmental pressures coinciding with the end of the LGM, but not with hominid arrival on the continent.

The one thing that could help to mitigate these issues – a plethora of data – is also seldom available when working with fossil evidence. Given how difficult it is for material to survive the battering of time, this is not all that surprising. In the figure above, the sparsity of measurements means that the author has drawn statistical inferences about this relationship from a single data point spanning a 10,000-year period. Without this solitary data point, would the relationship still hold? Would the timings of events still line up in a way that supports his argument?
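To see just how fragile that kind of inference can be, here is a toy leave-one-out check in R with entirely made-up numbers (not Guthrie’s measurements): fit the body-size trend with and without the single late data point and compare the slopes.

```r
# Toy illustration with made-up numbers (not Guthrie's measurements): how much
# can a single data point drive an apparent trend when observations are sparse?
fossils <- data.frame(
  age_ka = c(40, 35, 31, 27, 24, 13),       # hypothetical fossil ages (thousands of years BP)
  mc_mm  = c(230, 228, 225, 224, 222, 205)  # hypothetical metacarpal lengths (mm)
)

full_fit    <- lm(mc_mm ~ age_ka, data = fossils)        # trend using every fossil
reduced_fit <- lm(mc_mm ~ age_ka, data = fossils[-6, ])  # drop the youngest, isolated point

coef(full_fit)["age_ka"]     # slope with the solitary late data point included
coef(reduced_fit)["age_ka"]  # slope without it: the decline looks much gentler
```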

So, assault after assault, how do these questions ever get answered? It seems almost overwhelming, and we certainly weren’t going to solve it in the hour that we had. What we did get, however, was an insight into a surprisingly dynamic field, centred on an event that may or may not have occurred while humans were around to witness it.

By Josie

References:

Guthrie, R. D. (2003). Rapid body size decline in Alaskan Pleistocene horses before extinction. Nature, 426(6963), 169-171.

Ripple, W. J. & Van Valkenburgh, B. (2010). Linking top-down forces to the Pleistocene megafaunal extinctions. BioScience, 60(7), 516-526.

Soligo, C. & Andrews, P. (2005). Taphonomic bias, taxonomic bias and historical non-equivalence of faunal structure in early hominin localities. Journal of Human Evolution, 49(2), 206-229.


Rarity and conservation status: Hands on experience conducting ecological research

We, the Biodiversity Critical Thinking tutorial group (a.k.a. Team Isla), decided to undertake a mini data synthesis project at the end of our Critical Thinking Tutorials to test the research question: “Are rare species more likely to be declining in abundance than common species?”

This project is a learning experience in how science really works and a chance to test a really cool question that we don’t think has ever been explicitly tested in this way before.

STEP 1: Coming up with our research question

We began by each suggesting a question in ecological or environmental science that we were interested in investigating; these included:

  1. Rewilding: where has it been done? Is it realistic?
  2. Are native species better at providing ecosystem services than non-native? Do native species have higher productivity?
  3. Antarctic – is it the world’s largest biodiversity cold spot? Is biodiversity changing faster there and could it be a new hotspot one day?
  4. Home made electricity – what is the potential? Is it cost effective? What are the options? Could we look at an already self-sustaining village (e.g., Findhorn) and figure out the best way to make a difference in Scotland?
  5. How do current protected area maps overlay with future species ranges and climate models?
  6. Are rare species really common and common rare? What does this mean for conservation?

By elimination voting, we decided on idea number 6: are rare species really common and common species rare?


We had lots of fun creating our brainstorm from which we sprouted our ideas.


Our brainstorm only captures some of the wonderful discussion that was held.

STEP 2: Defining our methods and collecting our data

First we discussed what rare versus common meant and decided that rareness versus commonness relates to some combination of the following:

  1. The geographic range extent
  2. The local abundance
  3. Habitat specificity

We agreed that there were ways we could explore the first two components of rarity/commonality using available data for UK species.

We also discussed how we wanted to link rarity to conservation status, and how population data over time would allow us to calculate whether a species has a declining, stable or increasing population – an indicator of whether that population is threatened.

We started out by downloading the Living Planet Index (LPI, http://livingplanetindex.org/data_portal) data on populations of species from all around the world. We then subsetted out only the species that have population records from the UK. These 211 populations will be the basis of our data analysis. We then decided we could use the Global Biodiversity Information Facility (GBIF, http://www.gbif.org/) to download occurrence data for all of these species, and use those occurrence records to calculate each species’ geographic range extent. We can also use their population numbers and the number of occurrence records to estimate their local abundance.
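As a rough illustration of the range-extent step, here is a minimal R sketch (not our exact script; the species name, record limit and convex-hull approach are illustrative choices) using the rgbif and geosphere packages.

```r
library(rgbif)      # to query GBIF occurrence records
library(geosphere)  # areaPolygon() for areas on the globe

# Hypothetical example species; substitute any Latin name from the UK LPI subset
occ <- occ_search(scientificName = "Lutra lutra",
                  hasCoordinate = TRUE, limit = 1000)$data

coords <- na.omit(cbind(occ$decimalLongitude, occ$decimalLatitude))

# Crude range-extent proxy: area (in km^2) of the convex hull around the records
range_km2 <- abs(areaPolygon(coords[chull(coords), ])) / 1e6
range_km2
```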

We encountered some difficulties with downloading data from these data portals including formatting issues with the data, multiple data sets for the same species or for subspecies within the same species, inconsistent taxonomy, and more!  But, with a bit of R code and some patience, we got everything sorted!

At moments we felt a bit like we were in the scientific cloud – but I think we will make it out the other side.

https://www.ted.com/talks/uri_alon_why_truly_innovative_science_demands_a_leap_into_the_unknown?language=en

STEP 3: Testing our hypotheses and writing up our results

With much excitement and two pieces of R code and many many .csv files of data from the Global Biodiversity Information Facility and the Living Planet Index we came together for our last Critical Thinking Session to analyse our data and test our hypotheses.

Here are our research question and hypotheses that we clarified in our last session.

Research Question:

Are rare species (those with smaller geographic ranges and lower local abundance) more likely to have declining population trends than common species?

Hypotheses:

Range extent negatively correlates with rate of population change

H1: As geographic range increases populations will have a lower rate of population change.

H2: As geographic range increases populations will have a higher rate of population change.

H0: The rate of population change will not vary with geographic range size.

Now that the data were in hand, our methods finalised and our hypotheses clarified, we were ready to “unwrap our data present” – the thrill of finding out the exciting result hidden in a previously unanalysed dataset, a term coined on TeamShrub, Isla’s research group.
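For the record, the test we had in mind was simply a regression of population trend on range size, along these lines (the object and column names here are hypothetical, and the values are toy stand-ins for our real merged table):

```r
# Toy stand-in for our merged table (hypothetical names and values)
set.seed(123)
uk_pops <- data.frame(lpi_slope = rnorm(211, 0, 0.05),    # rate of population change
                      range_km2 = 10 ^ runif(211, 3, 7))  # GBIF-derived range size

fit <- lm(lpi_slope ~ log10(range_km2), data = uk_pops)
summary(fit)  # a negative slope would support H1, a positive slope H2, neither H0
```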

But wait, there was one problem!  We hadn’t actually merged our LPI slope and GBIF range size data. This should be a quick fix, we thought, but that is where we were wrong. We had downloaded the LPI data using the species’ common names and the GBIF data using their Latin names. All we needed was a key to link the two, but in the process of setting up our key we did something wrong and the two datasets just wouldn’t merge. We learned an important lesson about using a common taxonomy, and we used up the remaining minutes of our session trying to finish the analyses rather than getting to the final data-present unwrapping. I guess this is a very realistic experience of how science sometimes goes!
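The eventual fix boiled down to building a small name key and merging through it, roughly like this sketch (with toy stand-ins and hypothetical names for our real tables):

```r
# Toy stand-ins for our real tables (hypothetical names and values)
lpi_uk <- data.frame(common_name = c("Eurasian otter", "Atlantic puffin"),
                     lpi_slope   = c(0.02, -0.01))
gbif_ranges <- data.frame(latin_name = c("Lutra lutra", "Fratercula arctica"),
                          range_km2  = c(3.1e7, 2.4e7))

# The missing key: a lookup table linking the common names used in the LPI
# download to the Latin names used in the GBIF download
name_key <- data.frame(common_name = c("Eurasian otter", "Atlantic puffin"),
                       latin_name  = c("Lutra lutra", "Fratercula arctica"))

combined <- merge(merge(lpi_uk, name_key, by = "common_name"),
                  gbif_ranges, by = "latin_name")
combined  # one row per species, with both the LPI slope and the range size
```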


After the session and over e-mail we got the merging problem fixed and the data present is now unwrapped and the results are very interesting!!!  Stay tuned for the write up of our mini science project to discover what we found out about populations of common versus rare species in the UK…

by Isla and the Biodiversity Critical Thinking Group

Are we in a mass extinction?

This week’s discussion focussed on mass extinctions, specifically asking whether or not we are experiencing a mass extinction right now.

Firstly, it became clear that the term ‘mass extinction’ is not easily defined. Barnosky et al. (2010) define a mass extinction as a loss of >75% of species over a ‘geologically short interval’ (<2 million years). In contrast to Barnosky et al.‘s rigid definition, Sepkoski (1986, p.278) defines a mass extinction merely as a substantial increase in the amount of extinction in more than one geographically wide-spread higher taxon during a relatively short interval of geologic time. Many other definitions exist; these two merely mark the ends of a broad spectrum.

 


Figure 1 – Adapted from Rohde & Muller (2005), showing the placement of the five previous mass extinctions and their effects on biodiversity in terms of the number of genera.

Previous mass extinctions occurred on a time-scale of many millennia. The most dramatic extinction, which marked the Permian-Triassic boundary, destroyed an estimated 90% of life on Earth but took 60,000±48,000 years to accomplish it. 60,000 years is far longer than our records of biodiversity change and so it is difficult to decide what the long-term trends of future biodiversity loss may be. Instead we rely on the fossil record, but fossils are rare and biased towards those species which possess hard fossilisable body parts and are found in environments where fossilisation is a possibility. Elevated trends seen by some researchers may merely be temporary blips that will resolve quickly, or they may indicate a trend that will continue.

Linking back to our earlier discussions on the theme of how biodiverse the Earth actually is and how much we know about those species, we talked about how measures of biodiversity loss are skewed towards charismatic terrestrial vertebrates.  Understudied taxa such as bacteria or micro-arthropods may be going extinct faster than other taxa. Regnier et al. (2015) demonstrated from a random sample of terrestrial gastropods that we may have already lost 7% of known species, while only 0.04% of species have been officially recorded as extinct.

Barnosky et al. (2010) proposed six potential causes of the 6th mass extinction:

  • Co-option of resources from other species
  • Fragmentation of habitats
  • Species invasions
  • Pathogen spread
  • Direct species extermination
  • Global climate change

We concluded that, as always, a combination of factors was most likely to blame, which is unsurprising. When probed further about which cause had the biggest effect, habitat fragmentation and climate change tied. Fixing these two problems, however, would require very different approaches.

We discussed the infamous Living Planet Index (LPI) Report and whether it can be used to infer biodiversity’s impending doom (as is the WWF’s wont). The main summary of the report is a graph showing the steady decline of the index value, an aggregate measure of average population change since 1970. However, population change ≠ extinction rate. Additionally, the index is biased towards certain taxa (charismatic vertebrates): only 4.8% of the world’s vertebrate species have been analysed, and the species that have been studied well enough to be included are also the ones most likely to be experiencing declines due to their contact with humans.
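To make the “aggregate measure” point concrete, here is a simplified sketch in R of how an LPI-style index can be chained together from average rates of population change; this illustrates the general approach with toy populations, not the report’s exact methodology.

```r
# Simplified sketch of an LPI-style index (not the report's exact pipeline):
# average the annual log10 rates of change across populations, then chain them
# into an index that starts at 1 in the baseline year.
set.seed(1)
years <- 1970:2010
pops  <- replicate(50, cumprod(c(100, runif(length(years) - 1, 0.9, 1.05))))  # toy populations

log_rates  <- apply(pops, 2, function(N) diff(log10(N)))  # annual log10 rate per population
mean_rates <- rowMeans(log_rates)                         # average rate across populations
index      <- c(1, cumprod(10 ^ mean_rates))              # chained index, baseline year = 1

plot(years, index, type = "l", xlab = "Year", ylab = "LPI-style index (1970 = 1)")
```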


Figure 2 – From the Living Planet Report 2014, showing the general decline in vertebrate population abundance relative to 1970 levels.

To round up, we discussed whether any of us would be willing to say definitively whether or not we are currently in a mass extinction. The overwhelming decision was that we don’t have enough data and it’s too early to tell.

P.S. A tip for other discussion leaders: learn from me and don’t give your charges too much extra reading to do. Keep the reading focussed to ensure a good discussion on a few topics rather than briefly touching on lots of points.

By John

The challenges of biodiversity conservation

In our last critical thinking session for this semester we discussed the challenges to biodiversity conservation and the tools that might help us to overcome them. We built upon knowledge gathered from the Conservation Science course and decided to focus on the practical sides of conservation.

Firstly, we explored the role of conservation scientists in the modern world. We had all watched a presentation by Dr. Richard Fuller of the University of Queensland in preparation for our discussion. Dr. Fuller talked about how, as scientists, we have a responsibility to make sure our findings have real-world implications and that conservation science informs policies, urban planning and future developments. We brought up the importance of avoiding bias and acknowledged that there is a need for a mediator between academics and policy makers. Nevertheless, we agreed that conservation scientists should collaborate with local people and government officials to offer practical solutions to biodiversity issues. This work, however, involves negotiation, trade-offs and consideration of ecological, social and economic factors. Making decisions in such a multidisciplinary context benefits from a set framework, one which takes all of the aforementioned factors into account and outlines the scenario that best addresses them.

MARXAN (image source – University of Queensland)

 

One such product, developed at the University of Queensland, is MARXAN. MARXAN is decision-support software that aims to give the best solution to the ‘minimum set problem’: how do we achieve a certain biodiversity target in the most cost-effective manner? It can be used to highlight areas of high biodiversity value for future designation as protected areas, as well as to evaluate existing protected area networks. The program uses planning units – a grid of 1x1 km squares. For each planning unit, the following information is inputted: land use type, species distributions and the economic cost of the land. We can then select a conservation target (e.g. protecting 25% of the distribution of amphibian species in Tasmania), and the result of running the software is a selection of planning units that meets this criterion for the smallest cost.
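To illustrate what the minimum set problem is (though not how MARXAN actually solves it; MARXAN uses simulated annealing and many more features), here is a toy greedy selection in R with made-up planning units, species occurrences and costs.

```r
# Toy greedy illustration of the 'minimum set problem' (MARXAN itself uses
# simulated annealing and many more features; this only sketches the idea).
set.seed(7)
n_units   <- 30                                                           # hypothetical planning units
n_species <- 8
presence  <- matrix(rbinom(n_units * n_species, 1, 0.4), nrow = n_units)  # which species occur where
cost      <- runif(n_units, 1, 10)                                        # cost of protecting each unit
target    <- 2                                                            # protect >= 2 occurrences per species

chosen <- integer(0)
while (any(colSums(presence[chosen, , drop = FALSE]) < target) &&
       length(chosen) < n_units) {
  unmet   <- colSums(presence[chosen, , drop = FALSE]) < target
  # benefit = how many still-unmet targets a unit contributes to, per unit of cost
  benefit <- rowSums(presence[, unmet, drop = FALSE]) / cost
  benefit[chosen] <- -Inf                                                 # never pick the same unit twice
  chosen <- c(chosen, which.max(benefit))
}
sum(cost[chosen])  # total cost of the selected network of planning units
```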

Once we were familiarised with how MARXAN works, we discussed the value of this product for the advance of conservation science. Although one can argue that economic cost should not be part of this analysis and that biodiversity should be protected regardless of cost, we all agreed that MARXAN is definitely a step in the right direction. We live in a finite world, where both natural and monetary resources are limited, and using MARXAN helps optimise protected area networks. The only other shortcoming we found was that it uses a proxy measure, the boundary length modifier, to assess connectivity. Connectivity between protected areas is key for maintaining viable populations in the long term. Although the boundary length modifier addresses this issue, we are eager to see whether the new version of MARXAN will offer an improved solution.

MARXAN is designed to follow the principles of systematic conservation planning, known as CARE principles.

  • Comprehensive (Connected) – every type of ecosystem is protected and there is connectivity between the protected areas
  • Adequate – the conservation target we have chosen is adequate for maintaining ecosystem functionality and integrity
  • Representative – there is enough of every kind of biodiversity protected, e.g. as an insurance policy against fires and catastrophic events
  • Efficient – achieves the above targets for the least amount of money

Discussing efficiency and the evaluation of protected areas continued with a critical look at a paper titled ‘Replacing underperforming protected areas achieves better conservation outcomes’ by Fuller et al. (2010). Conservation targets tend to revolve around designating more protected areas, and the widely accepted solution to most biodiversity problems is the expansion of protected area networks; protecting habitat is seen as a surrogate for protecting biodiversity. The paper we read offers an alternative approach and stresses the importance of evaluation. Although having 13% of the Earth’s land surface under some sort of protection is one of the biggest successes of conservation, it is important to also monitor protected areas. Are they actually protecting biodiversity? Are the populations within them stable? As shown by Fuller et al. (2010), protected areas are not always successful, and in that case, is it worth maintaining them as such? Degazetting protected areas has to be done with extreme caution, and it is not yet clear how long we need to monitor them before deciding they are not successful surrogates for biodiversity.


The Tasmanian tree frog (Litoria burrowsae) is endemic to Tasmania and its populations are declining due to a widespread disease. Establishing a protected area network that follows the CARE principles can help boost this species’ resilience. Image source – Wild Side Australia

We all agreed on the importance of evaluation and on using realistic targets. We are studying ecology because we love nature, and we often have an intrinsic desire to protect all of it. This is rarely possible in today’s world of global urbanisation and rapid population growth. For example, if all of the populations of amphibians in Tasmania are to be protected, 99% of the state’s land has to be set aside for conservation. This is obviously not a realistic target, and instead, we ought to focus more on a target that allows humans and nature to co-exist.

Assessing the performance of protected areas is especially important as the effects of climate change become more pronounced. Species will shift their ranges, which might place their populations outside protected area boundaries. We talked about examples of misplaced protected areas from around the world, including African national parks. This is indeed a case of ‘so close, yet so far’ – Lisa told us about lions roaming right outside the fences of the park that was put in place to protect them.

In this session of critical thinking, we set out to discuss how we move forward in conservation. We decided that translating scientific knowledge into real life actions is paramount for effective conservation. This process can be further facilitated by software such as MARXAN, which considers ecological, social and economic factors. Conservation science emerged as a ‘crisis discipline’ and there is a long path ahead before the crisis is averted. It is important to evaluate our actions at each step of the way, as a means to maximise benefit for both people and environment.

By Gergana

Shouldn’t we know by now if biodiversity is increasing or decreasing?

Biodiversity is the variety of living organisms in an ecosystem. Biodiversity is thought to boost ecosystem productivity, as every species, no matter its shape, size, rarity or abundance, has a role to play in its ecosystem.

Whether biodiversity is increasing or decreasing depends on the timescale at which you are looking, as well as the spatial scale (e.g. local, regional, global). This makes it difficult to truly assess whether we have more, fewer or the same number of species. We based our discussion on Dornelas et al. (2014), Vellend et al. (2013), Brooks et al. (2006), Hooper et al. (2012) and McGill et al. (2015).

We analysed the paper by Dornelas et al. (2014), which covered 35,613 species. With our critical-thinking minds switched on, we noted that a figure of over 35,000 species does not initially seem like that many relative to the 8.7 million eukaryote species there might be on Planet Earth! However, when put in perspective, we decided that the study does have value: it compiles data on more species than most studies to date have managed, and realistically you don’t need every single species to draw valuable conclusions. Overall, we thought it might have been more precise if Dornelas et al. (2014) had focused on marine studies alone rather than including terrestrial species as well, as we weren’t sure how comparable the marine and terrestrial data sets were in these analyses.

A concern of ours was that large geographic regions of the world are absent due to biases in data collection, which moved us on to discussing the repercussions of sampling bias. In both the Vellend and Dornelas papers, data came mostly from temperate, well-developed areas where active ecosystem management has occurred for a number of years, as shown in the following two maps.

The first map is from Vellend et al. (2013) and the second from Dornelas et al. (2014), showing the areas where spatial gaps in biodiversity data occur. Seeing these gaps made us think about where biodiversity research happens globally and how certain regions, such as Sub-Saharan Africa, are probably under-represented in the ecology literature. We also pondered how the spatial biases in climate data are similar to those in ecology (third map), and how the same regions of the world are often understudied across a variety of fields.

Although the results from the under-represented areas seem to be consistent with the patterns found in well-studied parts of the world, we were unconvinced that we can definitively know what biodiversity change at local scales looks like across the planet as a whole without data from the least-studied regions, given the vastly different ecosystems found in these locations.

An overarching problem in these global analyses is that even analyses as holistic as these are biased in terms of which taxa were sampled. Microorganisms make up the vast majority of the species on Earth, yet they are vastly under-represented in the ecological literature, partly because there is generally less interest in studying them (sorry, microbes!). We agreed that we are very appreciative of the folks who go out and study microbial biodiversity in the real world. Thank you!

The reduced emphasis on microbial ecology is likely because the research is time-consuming, species identification is very tricky, and many species carry out the same or similar functions within an ecosystem. For example, if you removed a microbe from a system, it is likely that another would fill the niche and there would be no change in overall ecosystem function. On the other hand, if you altered the top of a trophic pyramid by removing an apex predator, the cascade of effects would be far more noticeable. We discussed the concept of functional redundancy (which you can read more about here) and whether it could be used to allocate conservation resources based on the usefulness of a species within an ecosystem.

To summarise, many scientists have attempted to track biodiversity increases and decreases, and many of them come to different conclusions depending on the scale at which they conduct their study, as shown in the following graph. We agreed that an important thing to take away from this discussion is something highlighted in the Dornelas paper: it is important to think about biodiversity change, and it is not necessarily the increases or decreases that we should care about, but the effect that biotic homogenisation could have on our world.


Hypothesized species richness changes across scales (Myers-Smith et al. unpublished).

Are biodiversity changes the same across different scales or do we see different patterns depending on which scale we look at? (Figure from Isla’s Conservation Science Lecture on Biodiversity Changes)

By Hannah

Experiments lasting multiple lifetimes?

In week 4 of our critical thinking tutorials, we tackled a comprehensive paper by Likens et al. (1970) describing the original Hubbard Brook experiments. The study aimed to determine the effect of deforestation and herbicide application on:

  • The quantity of stream water flowing out of the watershed
  • Chemical relationships within the forest ecosystem (e.g. nutrient relationships and eutrophication).

The Hubbard Brook Experimental Forest

What did the study do that has never been done before?

In the 1960s, this was the first small-watershed approach to studying elemental budgets and cycles. It is also, ultimately, the study that led to the development of the Long Term Ecological Research Network, which now plays a key role in long-term scientific studies in the United States.

We, as a group, were struck by the magnitude of the original study. The fact that the experiments were carried out on an entire watershed ecosystem, with multiple watersheds to apply treatments to, is incredible. Looking at the ongoing study now, the site has been expanded and encompasses even more individual watersheds, so that the treatments can be replicated several times over. Although this was all impressive, we began to question how representative this study might be for watersheds in other environments with different site characteristics…

How widely should the conclusions of the study be applied?

We noticed that the Hubbard Brook watersheds have certain site characteristics that may not be present in other watersheds throughout the world. For example, the area is surrounded by northern hardwood vegetation and the streams are located on watertight bedrock. Different site characteristics may influence the stream properties being measured. We acknowledged the difficulty and expense of carrying out global experimentation, which calls for a reasoned reduction in the number of study sites, perhaps focusing on ecologically important areas. Generally, no site can be representative of an entire region, yet extrapolating local patterns to global scales is central to the application of research insights and to policy-making.

More points on the study design…

We discussed what we would change if we could carry out the Hubbard Brook experiment again. In general, more samples of stream water and precipitation should be taken in both time and space. More replicates of the actual watersheds would also be needed to avoid pseudoreplication (see last week’s blog post!). We then began questioning the reason for the herbicide application and how it would affect the results, and whether preventing regrowth would impair the ecosystem’s capacity to buffer the effects of deforestation.

Buffer Capacity and LTER

How important is the environment’s ability to buffer against disturbances? We stressed the importance of the buffering capacity of ecosystems for the future in light of climate change and ongoing human disturbance.

To monitor changes within ecosystems, such as stream flow and nutrient budgets, we discovered the Long Term Ecological Research Network. This is a network of study sites that are being monitored over several decades. The aim is to extract trends from data sets put together using common protocols and approaches. Originally we thought this was a great idea but questioned the feasibility of conducting such research over the world as a whole.


The US Long-term Ecological Research Network

Long-term trends can be valuable for testing the individual effects of various factors (e.g. climate change and habitat destruction) and could be useful for future policy-making. However, delving further into the research network, we discovered that the majority of funding comes from the National Science Foundation and that grants are given on a 6-year renewal cycle. It is more difficult than we initially thought to secure long-term funding, and the threat of being cut off after 6 years would significantly reduce the value of the long-term time series as a whole.

Overall, we did find the concept of a long-term research network useful and would encourage the idea of having something similar in Europe, especially since the main aims of the American version are:

  • Make data more accessible to the broader scientific community using common data management protocols, cross-site and cross-agency research
  • Participate in network level and science synthesis activities

Perhaps the LTER concept can, if adopted throughout the world, contribute to developing more effective global data sets that will have generic data collection methods and metrics. In the future, this collaboration could reap major rewards for the analysis of long-term ecological data, which could result in more focused science and effective policy-making.

By Lisa

Pseudo-replication and experimental design

The third week of discussion for our critical thinking group involved thinking about experimental design and statistical analysis.

We read the paper:

Hurlbert, S. H. (1984). Pseudoreplication and the Design of Ecological Field Experiments. Ecological Monographs, 54(2), 187-211.

The questions that were raised included:

  • What is pseudo-replication and how is it avoided?
  • What are the different types of pseudo-replication?
  • How do you balance perfect experimental design with feasibility in ecology?

We started off the discussion by summarising the paper, and then delved straight into the questions. The main take home messages from the discussion were as follows:

1. The term pseudo-replication was coined by Hurlbert to refer to “the use of inferential statistics to test for treatment effects with data from experiments where either treatments are not replicated (though samples may be) or replicates are not statistically independent” – a definition that has made this a classic paper within ecology. We found the definition hard to comprehend straight off the bat, so we discussed how best to explain pseudo-replication to a 15-year-old at school. Our result can be seen below:

“Imagine you want to find out whether the temperature of a fish tank affects the lifespan of goldfish. You get two fish tanks (each of these fish tanks we call a replicate), one at 10°C and one at 20°C. You put five fish eggs in each tank and wait. When it comes to answering your question, you might find that on average the fish in the colder tank lived longer, great! But because you only had two fish tanks, really you just found the difference between the two tanks, and you still don’t know whether this is due to the difference in temperature between them. To fix this, you could have five fish tanks at each of the two temperatures, generating a mean for each temperature and an estimate of how much difference there was between the fish tanks at each temperature. With all of this information you can figure out how much of the difference in lifespan was due to temperature versus the different tanks.”
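Here is what that fix looks like in practice: a minimal R sketch with invented goldfish numbers, averaging the fish within each tank so that tanks, not individual fish, are the replicates being compared.

```r
# A minimal sketch with invented numbers: the goldfish example done properly,
# with the tank (not the individual fish) as the unit of replication.
set.seed(1)
tanks <- expand.grid(tank = 1:5, temp = c(10, 20))               # 5 replicate tanks per temperature
fish  <- tanks[rep(seq_len(nrow(tanks)), each = 5), ]            # 5 fish per tank
fish$lifespan <- rnorm(nrow(fish),
                       mean = ifelse(fish$temp == 10, 40, 36),   # hypothetical temperature effect
                       sd = 4)

# Average the fish within each tank, then compare the tank means (n = 5 per group)
tank_means <- aggregate(lifespan ~ tank + temp, data = fish, FUN = mean)
t.test(lifespan ~ temp, data = tank_means)
```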

2. Next we discussed the three types of pseudo-replication and their differences;

  • Simple pseudo-replication occurs when there is one experimental unit per treatment. Inferential statistics cannot separate variability due to treatment from variability due to experimental units when there is only one measurement per unit.
  • Temporal pseudo-replication is similar to simple pseudo-replication, but arises when repeated measurements taken from the same experimental unit at different times are treated as independent replicates. Successive measurements on the same unit are likely to be correlated, so inferential statistics again cannot separate variability due to treatment from variability due to the experimental units.
  • Sacrificial pseudo-replication occurs when a design does contain true replicates, but the data are pooled before analysis or samples within a replicate are treated as replicates themselves, so the information on variance among experimental units is ‘sacrificed’. We decided this would be the easiest type to account for, because it arises only at the analysis stage: you just need to redo the statistics, rather than the whole experiment as with simple pseudo-replication.

We thought the paper was clear and concise, with examples referring to specific experiments that made it easy to visualise and understand the concept of pseudo-replication. A particularly good figure can be seen below:

[Figure from Hurlbert (1984)]

3. Finally, we talked about how to balance perfect experimental design with feasibility in ecology. We came to the conclusion that the “perfect” experimental design is unrealistic in ecology, as the environment is constantly changing and there will always be unknowns that cannot be taken into account or measured. If a scientist were set on creating the “perfect” experiment, the study would never be put into practice and no knowledge would be gained, making the process pointless. Rational thinking is needed, taking into account the known facts and variables as well as the budget and time constraints, to strike a balance between these two factors.

Overall a very enthusiastic discussion was had with everyone contributing with their unique views. Looking forward to the next one!

by Claudia

Note from Isla: At the end of the discussion, I also gave a brief intro to hierarchical statistical modelling (mixed models and, very briefly, Bayesian approaches) as a way to deal with temporal and spatial pseudo-replication in many experimental designs.
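For anyone curious, a minimal sketch of that idea in R (invented goldfish data again, with hypothetical numbers): temperature as a fixed effect and tank as a random effect, so individual fish are not treated as independent replicates.

```r
# A minimal sketch (invented goldfish data, hypothetical numbers) of the mixed
# model approach: temperature as a fixed effect, tank as a random effect, so
# individual fish are not treated as independent replicates.
library(lme4)
set.seed(1)
fish <- expand.grid(fish = 1:5, tank = 1:5, temp = c(10, 20))  # 5 fish x 5 tanks x 2 temperatures
fish$tank_id  <- interaction(fish$tank, fish$temp)             # unique label per physical tank
tank_effect   <- rnorm(nlevels(fish$tank_id), 0, 2)            # shared tank-level noise
fish$lifespan <- ifelse(fish$temp == 10, 40, 36) +             # hypothetical temperature effect
                 tank_effect[as.integer(fish$tank_id)] +
                 rnorm(nrow(fish), 0, 4)                       # fish-level noise

mixed_fit <- lmer(lifespan ~ temp + (1 | tank_id), data = fish)
summary(mixed_fit)  # the temp estimate is now judged against tank-to-tank variation
```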

On climax communities and neutral theory

We started our second session of Critical Thinking with a discussion of Gleason’s 1927 paper on the succession concept. As with the previous older papers we have talked about, we noted the differences in writing style compared with how papers are written today. In opposing Clements’ earlier ideas on succession, Gleason often uses rather harsh language, but he does provide multiple examples as evidence for his theory. We compared both views and discussed the differences.

Clements sees vegetation as a whole organism that goes through different life stages – youth, maturity, death. He talks about a ‘climax’ community – the state towards which the organism is ultimately headed and where equilibrium will be achieved. Here, we defined equilibrium as the state where birth rates and mortality are equal, competition shapes the community and there is usually one dominant species. We discussed whether climax communities can exist in today’s dynamically changing world and offered two possible examples – the Daintree, a remnant patch of old-growth rainforest in Australia, and the boreal forests of places like northern Canada and Alaska.

Gleason puts forward the idea that disturbance prevents ecosystems from ever reaching the climax community. We agreed that this is especially true today, when the consequences of human activities can dominate natural processes.

The next paper we focused on was Hubbell’s “Tree Dispersion, Abundance, and Diversity in a Tropical Dry Forest”. He examined tree distributions on Barro Colorado Island in Panama, a man-made island in the Panama Canal famous for its research on rainforest ecology. Based on his observations at Barro Colorado, Hubbell went on to postulate his Neutral Theory. According to Neutral Theory, biodiversity arises at random, with each species following a random walk. Within a trophic level, species are considered demographically equivalent, i.e. the differences between their birth and death rates are neutral. In effect, neutral theory is the null hypothesis to niche theory, in which species evolve to fill a certain niche and function.
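A crude caricature of that random walk can be simulated in a few lines of R: a fixed-size community in which, at each step, a random individual dies and is replaced by the offspring of another randomly chosen individual, with no speciation or immigration to top diversity back up (a much-simplified toy, not Hubbell’s full model).

```r
# Crude caricature of neutral drift (no speciation or immigration, so richness
# can only erode): each step, one random individual dies and is replaced by the
# offspring of another randomly chosen individual, whatever its species.
set.seed(2)
J <- 500                                        # community size (number of individuals)
community <- sample(1:20, J, replace = TRUE)    # start with 20 species, equally likely

for (step in 1:20000) {
  dead <- sample(J, 1)                          # a random individual dies
  community[dead] <- community[sample(J, 1)]    # ...replaced by a random parent's species
}
length(unique(community))                       # species remaining after drift
```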

Neutral theory is proposed to be most valid when the system is ‘saturated’, e.g. in high-biodiversity areas such as coral reefs and tropical rainforests. However, the theory remains controversial: Dornelas et al. (2006) have put forward evidence that coral reef diversity refutes it.

To finish off, we talked about why there are so many species of trees in tropical forests. We discussed the following possibilities and in the end agreed that it is most likely a combination of factors acting simultaneously.

  • Neutral theory – thought to work in super high diversity areas like rainforests (‘saturated’)
  • Niche theory – finely split niches in rainforests, more species evolve to fill them
  • Stable benign climate – rainforests tend to be around the Equator where climate has remained relatively constant, e.g. no glaciations; low disturbance, low extinction rates, high speciation rates
  • Janzen-Connell hypothesis – density-dependent mortality: diseases and pests concentrate near the parent tree, so surviving saplings are spaced further apart, which is thought to decrease clumping and increase diversity
  • Energy – rainforests are high energy zones, lots of sunlight and rain, increased photosynthesis; higher energy -> more biomass -> increased rates of evolution?
  • History and Dispersal – valid on a more local scale; relate to plate tectonics, continental drift; dispersal of species

By Gergana

Note from Isla: We also chatted about ecological societies and the scientific literature. We talked about the different journals the papers we are reading have been published in, such as Science, Nature, Ecology, American Naturalist, Ecological Monographs, etc., and about who publishes Science, Nature and the major ecology journals – scientific societies or publishing companies. We briefly mentioned the concept of a journal impact factor and how we decide which journals are cooler than others. We discussed how the British Ecological Society is holding its annual meeting in Edinburgh this December, which will be an exciting opportunity to check out the latest in ecological research right in our own city!

Why are there so many kinds of animals?

In the first discussion for the “Biodiversity” group in the 2015 Critical Thinking in Ecology course at the University of Edinburgh, we tackled the theme of “The Diversity of Life”.

We read the papers:

Hutchinson, G. E. (1959). Homage to Santa Rosalia or why are there so many kinds of animals? American Naturalist, 93, 145-159.
Hutchinson, G. E. & MacArthur, R. H. (1959). A theoretical ecological model of size distributions among species of animals. American Naturalist, 93, 117-125.

And asked ourselves the question: “Why are there so many kinds of life out there on planet earth?”

We started the discussion off with introductions where we each described our favourite living organism to get us started: the list spanned the majestic killer whale to the often much maligned grey squirrel – that is really very cute despite its invasive tendencies.

Then we delved deeper into the science.  The major take-home messages of our discussion were as follows:

  1. If you want to start to understand the key concepts in the field of ecology, first go back and read a foundational paper or two, like Hutchinson’s address to the American Society of Naturalists from 1959, and then check out the Wikipedia pages for concepts like ecological niche, limiting similarity, body size and species richness, or macroecology
  2. Theoretical papers should be made easier to interpret for a non-mathematical audience, but equally, those of us who are afraid of equations should work harder to understand them.  Hutchinson and MacArthur’s model of size distributions among species of animals isn’t so complicated at all, if it is explained well!

    [Equation from Hutchinson and MacArthur, 1959]

  3. Ecology was different back in the day, with a different writing style in papers, fewer references, regression lines fitted “by eye” rather than by complex statistics, and close connections between researchers – Hutchinson was the PhD supervisor of MacArthur and Slobodkin, whom he thanks in his acknowledgements for providing him with “their customary kinds of intellectual stimulation”, before adding: “To all these friends I am most grateful”.

So, why are there so many kinds of animals? Well, I am not sure that over 50 years on from Hutchinson’s paper, we know for sure, but the answer probably has something to do with niches, evolution and macroecological patterns of the diversity of life on planet earth.

It was a very jolly first discussion – thanks everyone for your participation.  I am already looking forward to our next tutorial group meeting!

by Isla