A Q&A with The Atlantic’s Ed Yong

The author of a New York Times bestseller on microbes talks to us about science writing and blogging.

[This post originally appeared on Science Borealis.]

Following his recent keynote address at the Canadian Society of Microbiologists conference in Waterloo, Ontario, my Science Borealis colleague Robert Gooding Townsend and I chatted with Ed Yong, author of the New York Times bestseller I Contain Multitudes, about getting started in science communication, using humour in your writing, and whether science blogging is dead, among other topics. Here is that conversation, edited for brevity and clarity.

RGT: You were a judge at BAHfest [a science comedy event], and you use humour in your work all the time. I’m wondering, how do you see humour as fitting into the work that you do?

EY: It’s never an explicit goal. I’m not sitting here thinking to myself, “I need to be funny”, but I think I’ve always taken quite an irreverent approach to most of the things I do, science writing included. I think that sometimes there is a view that science writing should be austere because science is a little bit like that, and I think that’s ridiculous. It benefits from a light touch, just as anything else does, and that’s sort of the approach that I very naturally gravitate towards, and not taking things too seriously.

EZ: Your own blog has now ended, and notably, the SciLogs network has been shut down over the last year. I was wondering if you think that the heyday of science blogging is over now and do you feel that newsletters like yours may be the next big thing for sci-comm people, despite being somewhat less interactive than blogs?

EY: You know, there’s been a lot said about the death of blogs. I don’t necessarily think that’s true. I think that in the goodbye post to my blog I noted that I thought that blogs had just shifted into a new guise. The Atlantic was always a pioneer in fusing that kind of bloggy voice with the traditional rigours of journalism, and even back when I was blogging, before I joined the magazine, much of what I was seeing online had much the same depth of personality and voice that the best blogging I read had. So, I think those worlds have always been colliding, and I think that collision is now well underway, and a lot of what made blogs special has been absorbed into mainstream organizations. […] I think the spirit of blogging is very much still alive, and as for newsletters, I don’t see these things as the same. They’re different media. I started my newsletter just as a way of telling people about the work I was doing elsewhere. You know, the newsletter is not a blog; I’m not producing any original content for it, I’m just using it as a signpost so people can find my other work.

EZ: When you say that the voice that you see in mainstream media has become more like blogging, do you feel that it’s gotten a little less formal and more conversational?

EY: I don’t mean that it has across the board, but I think there are definitely publications like The Atlantic that have fused those things together. We have space for a lightness of touch in the way we write, provided that we’re still upholding the strictest journalistic ethics and standards.

RGT: One of the things you do very well in your work is that you maintain a high standard of journalistic integrity. That’s incredibly important, but I also wanted to probe how you face these challenges in science journalism. For example, there’s the question of how much are you trying to do good in your reporting, and how political an action is that? How do journalistic and scientific impartiality differ, and do you get caught in between?

EY: I think as journalists, we are not crusaders. We are definitely not advocates. The objective of my work is not to celebrate science, it’s not to get people on board with science, whatever that might mean. I’m not calling for more science funding or anything of the kind. What I am doing is telling people what is going on. I think obviously, all journalists have their own biases and their own starting points from which they approach the world. But I think we’re not chasing some artificial standard of neutrality. I think what instead is more important is being fair. […] If I’m going to write a piece slagging off one particular approach to science or debunking a lab’s work, I’m going to reach out to those people for their comments as well. I think that very much is just part of what we do. I don’t see any conflict there.

EZ: You’ve talked in the past about the importance to your career of having won the Daily Telegraph’s Young Science Writer prize in 2007. I was wondering, now that both that contest and the Guardian’s Wellcome Trust science writing prize are no more, do you have any advice for a beginning science writer to get their writing out there and build a reputation?

EY: That prize was important to me, but if you talk to a wide variety of science writers, you’ll see that there’s not any one route for getting into the field. […] If you do a Google search for “On the Origin of Science Writers”, you’ll find a page on my now-defunct blog which collects stories from different science writers about how they got into the business. It’s a useful resource, and I think one thing you’ll notice when you read those stories is that there is no single route that everyone takes. It’s all very, very diverse. So I think the critical thing with the competition was that it forced me to write and prove myself. Now, there are different ways you can do that, but the really important thing is that, if you’re interested in making a start in this business, you need to actually start producing things. You need to start writing. My advice to people who say that they’re aspiring science writers is that there really is no such thing – you’re either currently writing about science, right now, in which case you are a science writer, or you are not, in which case you’re not.

RGT: In your recent keynote address at the University of Waterloo, you talked about the representation that you strive for in your stories, in terms of the gender balance and racial background of your sources. This is an important thing in terms of who we see when we see people in science. One thing that you haven’t talked about, at least there, is how this has affected you personally, as a person of colour. Are you comfortable saying anything about how that may have affected your progression in science journalism?

EY: I’m not sure I have anything massively useful to say. I wouldn’t say I’ve experienced obvious discrimination on that basis, and I’m not sure it’s affected my career in any particular way. It is something you bear in mind. When I go to journalism conferences and sci-comm meetings, these are still largely white spaces, and that does make a difference in terms of how you feel as a part of that community. In Britain, there is another science writer called Kevin Fong, who’s amazing and does a lot of radio and TV work. Kevin and I have this running joke between us, because we’ll turn up at events and people will come up to me and say congratulations for the thing that Kevin did, and vice versa, because the idea of TWO East Asian science writers working in the same spaces is just clearly too much for some people to comprehend. So, you know, there is that, and it’s not fun, and it happens often enough to be annoying. It’s a very mild example of the kinds of problems that can happen when you have a lack of diversity in a field, and that’s certainly the case with science journalism as much as it is for science itself.

RGT: You’ve talked about finding stories about topics that most people find very boring, and turning them into very interesting and engaging stories. If that’s a power that you have, does it come with responsibilities? For example, if you think that climate change is an overwhelming threat, then distracting from that is detrimental. Where do your thoughts lie with that?

EY: I actually really don’t buy this argument at all. I think there’s no department of ranking all the world’s problems in order and then dealing with them one at a time. That’s not how people think about problems. That’s not how people react to the world around them. If we worked in that way, then, for example, I might never write about anything other than, say, the rise of fascism or climate change, or antibiotic-resistant bacteria, or what have you. [I don’t think we write or read about things] because they are necessarily important or they pose existential threats. I think we write about things because they are interesting to us.

EZ: Thank you very much for your time today.

EY: Thanks, guys. Pleasure talking to you.

What’s in a Name?

Part Two: How’s Your Latin?

The awesomely named Obamadon gracilis. Image: Reuters

What do Barack Obama, Marco Polo, and the band Green Day have in common? They all have at least one organism named after them. Obama has several, including a bird called Nystalus obamai and an extinct reptile named Obamadon gracilis. Green Day’s honorary organism is the plant Macrocarpaea dies-viridis, “dies-viridis” being Latin for “green day.” Many scientists also have species named after them, usually as recognition for their contributions to a field. My own PhD advisor, Dr. Anne Bruneau, has a genus of legumes, Annea, named after her for her work in legume systematics.

“Pear-leaved Pear.” Photo via Wikimedia Commons

Scientific names, colloquially called Latin names (though they often draw from Greek as well), consist of two parts: the genus and the specific epithet. The two parts together are called the species. Though many well-known scientists, celebrities, and other notables do have species named after them, most specific epithets describe some element of the organism or its life cycle. Many of these are useful descriptions, such as the (not so bald) bald eagle, whose scientific name is the more accurate Haliaeetus leucocephalus, which translates to “white-headed sea eagle.” (See here for some more interesting examples.) A few are just botanists being hilariously lazy with names, as in the case of Pyrus pyrifolia, the Asian pear, whose name translates as “pear-leaved pear.” So we know that this pear tree has leaves like those of pear trees. Great.
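
To make the two-part structure concrete, here’s a minimal sketch in Python (purely illustrative; the little class is my own invention, with the translations taken from this post):

    from typing import NamedTuple

    # Purely illustrative: a binomial name is a genus plus a specific
    # epithet, and the two parts together denote a single species.
    class BinomialName(NamedTuple):
        genus: str
        specific_epithet: str
        gloss: str  # English translation of the name

        def __str__(self) -> str:
            return f"{self.genus} {self.specific_epithet}"

    bald_eagle = BinomialName("Haliaeetus", "leucocephalus", "white-headed sea eagle")
    asian_pear = BinomialName("Pyrus", "pyrifolia", "pear-leaved pear")

    for species in (bald_eagle, asian_pear):
        print(f"{species}: {species.gloss}")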

In contrast to common names, discussed in our last post, Latin names are much less changeable over time, and do not have local variants. Soybeans are known to scientists as Glycine max all over the world, and this provides a common understanding for researchers who do not speak the same language. Latin is a good base language for scientific description because it’s a dead language, and so its usage and meanings don’t shift over time the way living languages do. Until recently, all new plant species had to be officially described in Latin in order to be recognized. Now, though, descriptions written in English alone are increasingly being accepted. Whether this is a good idea remains to be seen, since English usage may shift enough over the years to make today’s descriptions inaccurate in a few centuries’ time.

This isn’t to say that scientific names don’t change at all. Because scientific names are based in organisms’ evolutionary relationships to one another (with very closely related species sharing a genus, for example), if our understanding of those relationships changes, the name must change, too. Sometimes, this causes controversy. The most contentious such case in the botanical world has been the recent splitting of the genus Acacia.

The tree formerly known as Acacia. Via: Swahili Modern

Acacia is/was a large genus of legumes found primarily in Africa and Australia (discussed previously on this blog for their cool symbiosis with ants). In Africa, where the genus was first created and described, the tree is iconic. The image of the short, flat-topped tree against a savanna sunset, perhaps accompanied by the silhouette of a giraffe or elephant, is a visual shorthand for southern Africa in the popular imagination, and has been used in many tourism campaigns. The vast majority of species in the genus, however, are found in Australia, where they are known as wattles. When it became apparent that these sub-groups needed to be split into two different genera, one or the other was going to have to give up the name. A motion was put forth at the International Botanical Congress (IBC) in Vienna in 2005 to have the Australian species retain the name Acacia, because fewer total species would have to be renamed that way. Many African botanists and those with a stake in the acacias of Africa objected. After all, African acacias were the original acacias. The motion was passed, however, then challenged and upheld again at the next IBC in Melbourne in 2011. (As a PhD student in legume biology at the time, I recall people having firm and passionate opinions on this subject, which was a regular topic of debate at conferences.) It is possible it will come up again at this year’s IBC in China. Failing a major turnaround, though, the 80 or so African acacias are now known as Vachellia, while the more than one thousand species of Australian acacias continue to be known as Acacia.

The point of this story is that, though Latin names may seem unchanging and of little importance beyond serving as a means of cataloguing species, they are sometimes both a topic of lively debate and an adaptable reflection of our scientific understanding of the world.

Do you have a favourite weird or interesting Latin species name? Make a comment and let me know!

What’s in a Name?

Part One: Common vs. Scientific Names


When I was a kid growing up on a farm in southwestern Ontario, sumac seemed to be everywhere, with its long, spindly stems, big, spreading compound leaves, and fuzzy red berries. I always found the plant beautiful, and had heard that First Nations people used the berries in a refreshing drink that tastes like lemonade (which is true… here’s a simple recipe). But often, we kids were warned by adults that this was “poison sumac,” not to be touched because it would give us itchy, burning rashes, like poison ivy did. In fact, plenty of people would cut down any nascent stands to prevent this menace from spreading. We were taught to fear the stuff.

 

THIS is the stuff you need to look out for. Via The Digital Atlas of the Virginia Flora

It was many years later that I learned that the red-berried sumacs I grew up with were not only harmless, but also not closely related to the poisonous plant being referred to, which, as it turns out, has white berries and quite different leaves. Scientifically speaking, our innocent shrub is Rhus typhina, the staghorn sumac, while the rash-inducing plant is called Toxicodendron vernix. Not even in the same genus. Cautious parents were simply being confused by the similarity of the common names.

 

This story illustrates one of the ironies of common names for plants (and animals). Though they’re the way nearly everyone thinks of and discusses species, they’re without a doubt the most likely to confuse. Unlike scientific (Latin) names, which each describe a single species and are, for the most part, unchanging, a single common name can describe more than one species, can fall in and out of use over time, and may only be used locally. Also important to note is that Latin names are based on the taxonomy, or relatedness, of the species, while common names are usually based on either appearance, usage, or history.
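
The difference is easy to picture as two lookup tables. Here’s a minimal sketch using the sumacs above (the entries come from this post; the code itself is just an illustration):

    # One common name can point at several, quite unrelated species...
    common_to_scientific = {
        "sumac": ["Rhus typhina", "Toxicodendron vernix"],
    }

    # ...while each scientific name points at exactly one species.
    scientific_to_description = {
        "Rhus typhina": "staghorn sumac (harmless, red berries)",
        "Toxicodendron vernix": "poison sumac (white berries, avoid!)",
    }

    print(common_to_scientific["sumac"])              # ambiguous: two species
    print(scientific_to_description["Rhus typhina"])  # unambiguous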

 

This isn’t to say that common names aren’t valuable. Because common names describe what a plant looks like or how it is used, they can convey pertinent information. The common names of plants are also sometimes an important link to the culture that originally discovered and used the species, as in North America, where native plants all have names in the local languages of First Nations people. It seems to me, although I have no hard evidence to back it up, that these original names are now more often being used to form the Latin name of newly described species, giving a nod to the people who named it first, or from whose territory it came.

 

One high profile case of this in the animal world is Tiktaalik roseae, an extinct creature which is thought to be a transitional form (“missing link”) between fish and tetrapods. The fossil was discovered on Ellesmere Island in the Canadian territory of Nunavut, and the local Inuktitut word “tiktaalik”, which refers to a type of fish, was chosen to honour its origin.

 

But back to plants… Unlike staghorn sumac and poison sumac, which are at least in the same family of plants (albeit not closely related within that family), sometimes very distinct species of plants can end up with the same common name through various quirks of history. Take black pepper and bell or chili peppers. Black pepper comes from the genus Piper, and is native to India, while hot and sweet peppers are part of the genus Capsicum. Botanically, the two are quite distantly related. So why do they have the same name? Black pepper, which bore the name first, has been in use since ancient times and was once very highly valued. The confusion came about, it would seem, when Columbus visited the New World and, finding a fruit which could be dried, crushed, and added to food to give it a sharp spiciness, referred to it as “pepper” as well.

A black peppercorn. Easy to confuse with a chili pepper, I guess? Via: Wikimedia Commons

 

Another interesting, historically-based case is that of corn and maize. In English-speaking North America, corn refers to a single plant, Zea mays. In Britain and some other parts of the Commonwealth, however, “corn” is used to indicate whatever grain is primarily eaten in a given locale. Thus, Zea mays was referred to as “Indian corn” because it was consumed by native North Americans. Over time, this got shortened to just “corn”, and became synonymous with only one species. Outside of Canada and the United States, the plant is referred to as maize, which is based on the original indigenous word for the plant. In fact, in scientific circles, the plant tends to be called maize even here in North America, to be more exact and avoid confusion.

 

Not Spanish, not a moss. Via: Wikimedia Commons

And finally, for complete misinformation caused by a common name, you can’t beat Spanish moss. That wonderful gothic stuff you see draped over trees in the American South? That is neither Spanish, nor a moss. It is Tillandsia usneoides, a member of the Bromeliaceae, or pineapple family, and is native only to the New World.

 

And that wraps up my very brief roundup of confusing common names and why they should be approached with caution. In part two, I’ll discuss Latin names, how they work, and why they aren’t always stable and unchanging, either.

 

There are SO many more interesting and baffling common names out there. If you know of a good one, let me know in the comments!

 

*Header image via the University of Guelph Arboretum

Forever Young

How Evolution Made Baby-faced Humans & Adorable Dogs


Who among us hasn’t looked at the big round eyes of a child or a puppy gazing up at us and wished that they’d always stay young and cute like that? You might be surprised to know that this wish has already been partially granted. Both you as an adult and your full-grown dog are examples of what’s referred to in developmental biology as paedomorphosis (“pee-doh-mor-fo-sis”), or the retention of juvenile traits into adulthood. Compared to closely related and ancestral species, both humans and dogs look a bit like overgrown children. There are a number of interesting reasons this can happen. Let’s start with dogs.

When dogs were domesticated, humans began to breed them with an eye to minimizing the aggression that naturally exists in wolves. Dogs that retained the puppy-like quality of being unaggressive and playful were preferentially bred. This caused certain other traits associated with juvenile wolves to appear, including shorter snouts, wider heads, bigger eyes, floppy ears, and tail wagging. (For anyone who’s interested in a technical explanation of how traits can be linked like this, here’s a primer on linkage disequilibrium from Discover. It’s a slightly tricky, but very interesting concept.) All of these are seen in young wolves, but disappear as the animal matures. Domesticated dogs, however, will retain these characteristics throughout their lives. What began as a mere by-product of selecting for non-aggressive dogs is now being reinforced for its own sake. We love dogs that look cute and puppy-like, and are now breeding for that very trait, which can cause it to be carried to extremes, as in breeds such as the Cavalier King Charles spaniel, leading to breed-wide health problems.

An undeniably cute Cavalier King Charles spaniel, bred for your enjoyment. (Via Wikimedia Commons)

Foxes, another wild canid, have been experimentally domesticated by scientists interested in the genetics of domestication. Here, too, as the foxes are bred over numerous generations to be friendlier and less aggressive, individuals with floppy ears and wagging tails – traits not usually seen in adult foxes – are beginning to appear.

But I mentioned this happening in humans, too, didn’t I? Well, similarly to how dogs resemble juvenile versions of their closest wild relative, humans bear a certain resemblance to juvenile chimpanzees. Like young apes, we possess flat faces with small jaws, sparse body hair, and relatively short arms. Scientists aren’t entirely sure what caused paedomorphosis in humans, but there are a couple of interesting theories. One is that, because our brains are best able to learn new skills prior to maturity (you can’t teach an old ape new tricks, I guess), delayed maturity, and the suite of traits that come with it, allowed greater learning and was therefore favoured by evolution. Another possibility has to do with the fact that juvenile traits – the same ones that make babies seem so cute and cuddly – have been shown to elicit more helping behaviour from others. So the more subtly “baby-like” a person looks, the more help and altruistic behaviour they’re likely to get from those around them. Since this kind of help can contribute to survival, it became selected for.

You and your dog, essentially. (Via The Chive)

Of course, dogs and humans aren’t the only animals to exhibit paedomorphosis. In nature, the phenomenon is usually linked to the availability of food or other resources. Interestingly, both abundance and scarcity can be the cause. Aphids, for example, are small insects that suck sap out of plants as a food source. Under competitive conditions in which food is scarce, the insects possess wings and are able to travel in search of new food sources. When food is abundant, however, travel is unnecessary and wingless young are produced, which grow into adulthood still resembling juveniles. Paedomorphosis is here induced by abundant food. Conversely, in some salamanders, it is brought on by a lack of food. Northwestern salamanders are typically aquatic as juveniles and terrestrial as adults, having lost their gills. At high elevations, where the climate is cooler and a meal is harder to come by, many of these salamanders remain aquatic, keeping their gills throughout their lives because aquatic environments represent a greater chance of survival. In one salamander species, the axolotl (which we’ve discussed on this blog before), metamorphosis has been lost completely, leaving these animals fully aquatic and looking more like weird leggy fish than true salamanders.

An axolotl living the young life. (Via Wikimedia Commons)

So paedomorphosis, this strange phenomenon of retaining juvenile traits into adulthood, can be induced by a variety of factors, and it’s a nice demonstration of the plasticity of developmental programs in living creatures. Maturation isn’t always a simple trip from point A to point B in a set amount of time. There are many, many genes at play, and if nature can tweak some of them for a better outcome, evolution will ensure that the change sticks around.


*Header image by: Ephert – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=39752841

Redesigning Life

John Parrington’s new book sets the stage for an informed debate on genetic modification


This post originally appeared on Science Borealis

“Imagine if living things were as easy to modify as a computer Word file.” So begins John Parrington’s journey through the recent history and present-day pursuits of genetic modification in Redesigning Life. Beginning with its roots in conventional breeding and working right up to the cutting-edge fields of optogenetics, gene editing, and synthetic biology, the book is accessible to those with some undergraduate-level genetics, or with secondary-school biology and a keen interest in the subject. This audience will be well served by a book whose stated goal is to educate the public so that a proper debate can take place over the acceptable uses of genome editing.

 

Parrington doesn’t shy away from the various ethical concerns inherent in this field. While he points out, for example, that many fears surrounding transgenic foods are the result of sensational media coverage, he also discusses the very real concerns relating to issues such as multinational companies asserting intellectual property rights over living organisms, and the potential problems of antibiotic resistance genes used in genetically modified organisms (GMOs). Conversely, he discusses the lives that have been improved with inventions such as vitamin A-enriched “golden rice”, which has saved many children from blindness and death due to vitamin deficiencies, and dairy cattle that have been engineered to lack horns, so they can be spared the excruciating process of having their horn buds burned off with a hot iron as calves. These are compelling examples of genetic modification doing good in the world.

 

This is Parrington’s approach throughout the book: both the positive and negative potential consequences of emerging technologies are discussed. Particular attention is paid to the pain and suffering of the many genetically modified animals used as test subjects and models for disease. This cost is weighed against the fact that life-saving research could not go ahead without these sacrifices. No conclusions are drawn, and Parrington’s sprawling final chapter, devoted solely to ethics, is meandering and unfocussed, perhaps reflecting the myriad quagmires to be negotiated.

 

Weaving in entertaining and surprising stories of the scientists involved, Parrington frequently brings the story back to a human level and avoids getting too bogged down in technical details. We learn that Gregor Mendel, of pea-breeding fame, originally worked with mice, until a bishop chastised him for not only encouraging rodent sex but watching it. Mendel later commented that it was lucky that the bishop “did not understand that plants also had sex!” We’re told that Antonie van Leeuwenhoek, known as the father of microscopy, was fond of using himself as a test subject. At one point, he tied a piece of stocking containing one male and two female lice to his leg and left it for 25 days to measure their reproductive capacity. Somewhat horrifyingly, he determined that two breeding females could produce 10,000 young in the space of eight weeks.

 

The applications of the fast-moving, emerging technologies covered in Redesigning Life will astound even those with some familiarity with modern genetics. The new field of optogenetics, for example, uses light-sensitive proteins such as opsins to trigger changes in genetically modified neurons in the brain when light is shone upon them. In a useful, yet disturbing, proof-of-concept experiment, scientists created mind-controlled mice, which, at the flick of a switch, can be made to “run in circles, like a remote-controlled toy.” More recently, sound waves and magnetic fields have been used to trigger these reactions less invasively. This technique shows potential for the treatment of depression and epilepsy.

 

The book goes into some detail about CRISPR/Cas9 gene editing, a process that has the potential to transform genetic modification practices. This system is efficient, precise, broadly applicable to a range of cell types and organisms, and shortens the research timeline considerably compared to traditional methods of creating GMOs. It underpins most of the other technologies discussed in the book, and its applications seem to be expanding daily. In the words of one of its developers, Jennifer Doudna, “Most of the public does not appreciate what is coming.” These words could be applied to almost any technology discussed in this book. Already within reach are so-called “gene drive” technologies, which could render populations of malaria-bearing mosquitoes – or any other troublesome species – sterile, potentially driving them to extinction, albeit with unknown ancillary consequences. Researchers have also developed a synthetic genetic code known as XNA, which sports two new nucleotides and can code for up to 172 amino acids, as opposed to the usual 20. Modifying organisms to contain XNA opens up the possibility of creating proteins with entirely novel functions, as well as the tantalizing prospect of plants and animals that are entirely immune to all current viruses, due to the viruses’ inability to hijack a foreign genetic code for their own uses.

 

While the book touches on agriculture, its main preoccupation is medical research. Despite many of the therapies covered being far from ready for use in humans, one can’t help but feel that a revolution in the treatment of diseases, both infectious and genetic, is at hand. Only a year ago, gene editing was used to cure a baby girl of leukemia by engineering her immune system to recognize and attack her own cancerous cells. In the lab, the health of mice with single gene disorders such as Huntington’s disease and Duchenne muscular dystrophy is being significantly improved. Writing in 1962 in his book The Genetic Code, Isaac Asimov speculated that someday “the precise points of deficiency in various inherited diseases and in the disorders of the cell’s chemical machinery may be spotted along the chromosome.” Some 54 years later, we have the technology not only to spot these points but to fix them as precisely as a typo in a manuscript.

An Inconvenient Hagfish

On the importance of intermediates.


We think of scientific progress as working like building blocks constantly being added to a growing structure, but sometimes a scientific discovery can actually lead us to realize that we know less than we thought we did. Take vision, for instance. Vertebrates (animals with backbones) have complex, highly developed “camera” eyes, which include a lens and an image-forming retina, while our invertebrate evolutionary ancestors had only eye spots, which are comparatively very simple and can only sense changes in light level.

At some point between vertebrates and their invertebrate ancestors, primitive patches of light-sensitive cells, which served only to alert their owners to day/night cycles and perhaps the passing of dangerous shadows, evolved into an incredibly intricate organ capable of forming clear, sharp images; distinguishing minute movements; and detecting minor shifts in light intensity.

Schematic of how the vertebrate eye is hypothesized to have evolved, by Matticus78

In order for evolutionary biologists to fully understand when and how this massive leap in complexity was made, we need an intermediate stage. Intermediates usually come in the form of transitional fossils; that is, remains of organisms that are early examples of a new lineage, and don’t yet possess all of the features that would later evolve in that group. An intriguing and relatively recent example is Tiktaalik, a creature discovered on Ellesmere Island (Canada) in 2004, which appears to be an ancestor of all terrestrial vertebrates, and which possesses intermediate characteristics between fish and tetrapods (animals with four limbs, the earliest of which still lived in the water), such as wrist joints and primitive lungs. The discovery of this fossil has enabled biologists to see what key innovations allowed vertebrates to move onto land, and to precisely date when it happened.

There are also species which are referred to as “living fossils”, organisms which bear a striking resemblance to their ancient ancestors, and which are believed to have physically changed little since that time. (We’ve actually covered a number of interesting living fossils on this blog, including lungfish, Welwitschia, aardvarks, the platypus, and horseshoe crabs.) In the absence of the right fossil, or in the case of soft body parts that aren’t usually well-preserved in fossils, these species can sometimes answer important questions. While we can’t be certain that an ancient ancestor was similar in every respect to a living fossil, assuming so can be a good starting point until better (and possibly contradictory) evidence comes along.

So where does that leave us with the evolution of eyes? Well, eyes being made of soft tissue, they are rarely well preserved in the fossil record, so this was one case in which looking at a living fossil was both possible and made sense.

Hagfish, which look like a cross between a snake and an eel, sit at the base of the vertebrate family tree (although they are not quite vertebrates themselves), a sort of “proto-vertebrate.” Hagfish are considered to be a living fossil of their ancient, jawless fish ancestors, appearing remarkably similar to those examined from fossils. They also have primitive eyes. Assuming that contemporary hagfishes were representative of their ancient progenitors, this indicated that the first proto-vertebrates did not yet have complex eyes, and gave scientists an earliest possible date for the development of this feature. If proto-vertebrates didn’t have them, but all later, true vertebrates did, then complex eyes were no more than 530 million years old, corresponding to the time of the common ancestor of hagfish and vertebrates. Or so we believed.

The hagfish (ancestors) in question. Taken from: Gabbott et al. (2016) Proc. R. Soc. B 283: 20161151

This past summer, a new piece of research was published that upended our assumptions. A detailed electron microscope and spectral analysis of fossilized Mayomyzon (an ancient jawless fish) indicated the presence of pigment-bearing organelles called melanosomes, which are themselves indicative of a retina. Previously, these melanosomes, which appear in the fossil as dark spots, had been interpreted as either microbes or a decay-resistant material such as cartilage.

This new finding suggests that the simple eyes of living hagfish are not a trait passed down unchanged through the ages, but the result of degeneration over time, perhaps due to their no longer being needed for survival (much like the sense of smell in primates). What’s more, science has now lost its anchor point for the beginning of vertebrate-type eyes. If an organism with pigmented cells and a retina existed 530 million years ago, then these structures must have begun to develop significantly earlier, although until a fossil is discovered that shows an intermediate stage between Mayomyzon and primitive invertebrate eyes, we can only speculate as to how much earlier.

This discovery is intriguing because it shows how new evidence can sometimes remove some of those already-placed building blocks of knowledge, and how something as apparently minor as tiny dark spots on a fossil can cause us to have to reevaluate long-held assumptions.

Sources

  • Gabbott et al. (2016) Proc. R. Soc. B 283: 20161151
  • Lamb et al. (2007) Nature Rev. Neuroscience 8: 960-975

*The image at the top of the page is of Pacific hagfish at 150 m depth, California, Cordell Bank National Marine Sanctuary, taken and placed in the public domain by Linda Snook.

Sex & the Reign of the Red Queen

Why sexual species beat clones every time.


“Now, here, you see, it takes all the running you can do to keep in the same place.”

From a simple reproductive perspective, males are not a good investment. With apologies to my Y chromosome-bearing readers, let me explain. Consider for a moment a population of clones. Let’s go with lizards, since this actually occurs in lizards. So we have our population of lizard clones. They are all female, and all able to reproduce, giving them twice the potential for producing new individuals compared with a species that reproduces sexually, in which only 50% of the members can bear young. Males require all the same resources to survive to maturity, but cannot directly produce young. From this viewpoint alone, the population of clones should out-compete a bunch of sexually reproducing lizards every time. Greater growth potential. What’s more, the clonal lizards can better exploit a well-adapted set of genes (a “genotype”); if one of them is well-suited to survive in its environment, they all are.
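
The growth advantage is plain if you run the numbers. A minimal sketch, with invented starting sizes, assuming each breeder leaves two young per generation:

    def project(generations: int, start: float, breeding_fraction: float,
                offspring_per_breeder: float = 2.0) -> float:
        """Population size when only `breeding_fraction` of individuals
        each produce `offspring_per_breeder` young per generation."""
        size = start
        for _ in range(generations):
            size *= breeding_fraction * offspring_per_breeder
        return size

    clones = project(10, start=100, breeding_fraction=1.0)   # all can bear young
    sexuals = project(10, start=100, breeding_fraction=0.5)  # half are males

    print(clones, sexuals)  # 102400.0 vs. 100.0: the clones' lead compounds fast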

Now consider a parasite that preys upon our hypothetical lizards. The parasites themselves have different genotypes, and a given parasite genotype can attack certain host (i.e. lizard) genotypes, like keys that fit certain locks. Over time, they will evolve to be able to attack the most common host genotype, because that results in their best chance of survival. If there’s an abundance of host type A, but not much B or C, then more A-type parasites will succeed in reproducing, and over time, there will be more A-type parasites overall. This is called a selection pressure, in favour of A-type parasites. In a population of clones, however, there is only one genotype, and once the parasites have evolved to specialise in attacking it, the clones have met their match. They are all equally vulnerable.

The sexual species, however, presents a moving target. This is where males become absolutely worth the resources it takes to create and maintain their existence (See? No hard feelings). Each time a sexual species mates, its genes are shuffled and recombined in novel ways. There are both common and rare genotypes in a sexual population. The parasite population will evolve to be able to attack the most common genotype, as they do with the clones, but in this case, it will be a far smaller portion of the total host population. And as soon as that particular genotype starts to die off and become less common, a new genotype, once rare (and now highly successful due to its current resistance to parasites), will fill the vacuum and become the new ‘most common’ genotype. And so on, over generations and generations.
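
This frequency-dependent chase is simple enough to simulate. Below is a toy sketch (all parameters invented); it models only the parasites tracking the commonest genotype, not the mechanics of recombination itself:

    import random
    from collections import Counter

    def generation(hosts, parasite_target, infection_penalty=0.7):
        """Hosts matching the parasites' preferred genotype die with
        probability `infection_penalty`; survivors repopulate to full size."""
        survivors = [g for g in hosts
                     if g != parasite_target or random.random() > infection_penalty]
        return random.choices(survivors, k=len(hosts))

    # Three host genotypes, with A currently the most common.
    hosts = ["A"] * 70 + ["B"] * 20 + ["C"] * 10

    for gen in range(15):
        target = Counter(hosts).most_common(1)[0][0]  # parasites track the majority
        hosts = generation(hosts, target)
        print(gen, target, Counter(hosts))

Run it a few times and the “most common” label keeps cycling among A, B, and C. A clonal population has only one genotype, so once the parasites match it, there is no rare type left to rebound.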

Both species, parasite and host, must constantly evolve simply to maintain the status quo. This is where the Red Queen hypothesis gets its name: in Through the Looking-Glass, the Red Queen tells Alice, “here, you see, it takes all the running you can do to keep in the same place.” For many years, evolution was thought of as a journey with an endpoint: species would evolve until they were optimally adapted to their environment, and then stay that way until the environment changed in some fashion. If this were the case, however, we would expect that a given species would be less likely to go extinct the longer it had existed, because it would be better and better adapted over time. And yet, the evidence didn’t seem to bear this prediction out. The probability of extinction seemed to stay the same regardless of the species’ age. We now know that this is because the primary driver of evolution isn’t the environment, but competition between species. And that’s a game you can lose at any time.

Passionflower. Photo by Yone Moreno on Wikimedia Commons.

Now, the parasite attacking the lizards was just a (very plausible) hypothetical scenario, but there are many interesting cases of the Red Queen at work in nature. And it’s not all subtly shifting genotypes, either; sometimes it’s a full-on arms race. Behold the passionflower. In the time of the dinosaurs, passionflowers developed a mutually beneficial pollinator relationship with longwing butterflies. The flowers got pollinated, the butterflies got nectar. But then, over time, the butterflies began to lay their eggs on the vines’ leaves. Once the eggs hatched, the young would devour the leaves, leaving the plant much the worse for wear. In response, the passionflowers evolved to produce cyanide in their leaves, poisoning the butterfly larvae. The butterflies then turned the situation to their advantage by evolving the ability to not only eat the poisonous leaves, but to sequester the cyanide in their bodies and use it to themselves become poisonous to their predators, such as birds. The plants’ next strategy was to mimic the butterflies’ eggs. Longwing butterflies will not lay their eggs on a leaf which is already holding eggs, so the passionflowers evolved nectar glands of the same size and shape as a butterfly egg. After aeons of this back and forth, the butterflies are currently laying their eggs on the tendrils of the passionflower vines rather than the leaves, and we might expect that passionflowers will next develop tendrils which appear to have butterfly eggs on them. These sorts of endless, millennia-spanning arms races are common in nature. Check out my article on cuckoos for a much more murderous example.

Egg-like glands at the base of the passionflower leaf (the white dots on my index finger).

Had the passionflowers in this example been a clonal species, they likely wouldn’t have stood a chance. Innovations upon which defences can be built, such as higher-than-average levels of cyanide or slightly more bulbous nectar glands, come from uncommon genotypes: genotypes produced by the shuffling of genes that occurs in every generation in sexual species.

And that, kids, is why sex is such a fantastic innovation. (Right?) Every time an illness goes through your workplace, and everybody seems to get it but you, you’ve probably got the Red Queen (and your uncommon genotype) to thank.

 

Sources

  • Brockhurst et al. (2014) Proc. R. Soc. B 281: 20141382.
  • Lively (2010) Journal of Heredity 101 (suppl.): S13-S20 [See this paper for a very interesting full explanation of the links between the Red Queen hypothesis and the story by Lewis Carroll.]
  • Vanderplank, John. “Passion Flowers, 2nd Ed.” Cambridge: MIT Press, 1996.

*The illustration at the top of the page is by Sir John Tenniel for Lewis Carroll’s “Through the Looking Glass,” and is now in the public domain.