Thermal Expansion & Old Glass, a Cautionary Tale

The story of my first and most lasting lesson on Why We Follow The Rules In The Lab


I recently saw a Twitter thread where people who had worked in scientific labs were sharing the most egregious lab safety violations and/or injuries they’d witnessed. It was eye-opening to see the variety of things that can go wrong, even during routine tasks. Research is often so intensely repetitive that it’s easy to believe that just because you’ve done something without incident a thousand times, nothing will go wrong on the thousand and first. But I’ve learned firsthand that it can, and not necessarily because the researcher did anything wrong.

So at the risk of looking very stupid, today I’d like to share with you the story of my first and most lasting lesson on Why We Follow The Rules In The Lab. Specifically, the rule about wearing closed-toed shoes, because I’ve seen it disregarded so many times since. In my defence, I was in my very early twenties at the time, and therefore still immortal in my own estimation. What’s more, my job at that time, working in a microscopy lab, was primarily to scan pre-made plant tissue slides under the microscope and take photographs to be used in lectures. I wasn’t messing around with a lot of hazardous chemicals or anything. But I was in charge of seeing that some experimental plants got watered, and because we were working with mycorrhizal fungi, that also meant sterilizing the water so I wouldn’t introduce a lot of fungal competition to the pots.

It was a Friday afternoon in June, so obviously my mind was elsewhere. The lab was hitting the campus bar for happy hour shortly, and I just wanted to get done, so I popped down to the scary basement autoclave room to grab the water that had finished sterilizing. (As an aside, I’ve worked in a number of different research facilities, and nearly all of them had the exact same scary basement autoclave room. It’s like the furnace from Home Alone.) This autoclave, which was probably older than I was, was a cylinder several feet deep that you had to lower things into. My water was waiting for me in the biggest Erlenmeyer flask I’ve ever seen; I think it held four or five litres, and it looked decades older than even the autoclave. It was no longer actively boiling, but it was still very hot, so I gloved up and lifted it out of the autoclave. This being summer, and me being young and stupid, I had a lab coat and thermal gloves, but only sandals on my feet.

As I lifted the flask out, I needed to readjust my grip in those big gloves, so I let the edge of the flask rest gently on the outer surface of the autoclave, which was metal, and cold relative to the glass. Well, the ancient flask must have finally had enough rounds of heating and cooling, because it abruptly shattered, sending five litres of near-boiling water down onto my mostly bare feet. It was extraordinarily painful. I spent happy hour sitting in the lab with ice packs on my feet while an unhappy lab manager stood over me, filling out incident paperwork. The lines of my sandals stood out in negative relief against the red, blistered skin for a good month, like a terrible sunburn. I was lucky it was just water, and not some hazardous chemical or something like agar that would have stuck to my feet and made the burns worse. I heard about it for the rest of my time in that lab.
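For the physics-inclined, a rough back-of-envelope calculation shows why one touch of cold metal can be the last straw. A suddenly chilled patch of glass tries to contract but is held in place by the hot glass around it, which puts the surface under tension. Here’s a minimal sketch in Python, assuming generic handbook values for soda-lime glass rather than anything measured on that poor old flask:

```python
# Back-of-envelope thermal shock estimate. All material values are generic
# handbook figures, assumed for illustration only; the real flask's
# properties (and its decades of accumulated scratches) are unknown.

E = 70e9        # Young's modulus of soda-lime glass (Pa)
alpha = 9e-6    # coefficient of linear thermal expansion (1/K)
nu = 0.22       # Poisson's ratio
delta_T = 80    # sudden temperature drop where hot glass meets cold metal (K)

# A suddenly chilled, constrained surface layer develops a biaxial tensile
# stress of roughly sigma = E * alpha * delta_T / (1 - nu).
sigma = E * alpha * delta_T / (1 - nu)
print(f"Peak surface tensile stress: {sigma / 1e6:.0f} MPa")  # ~65 MPa

# Aged, scratched glass can fail in tension well below 50 MPa, so one
# careless touch is plausibly enough. Borosilicate (alpha ~ 3.3e-6 /K)
# would see roughly a third of this stress, which is one reason most
# modern labware is borosilicate.
```

The exact numbers matter far less than the comparison: the stress from a single contact chill lands in the same range as the practical strength of old, flawed glass.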

So that’s my story of flouting lab safety regs and paying the price. If there are any STEM undergrads/grad students reading, let my mistake be a lesson.

Cover your toes, wear your PPE.

Did you do something foolish in the lab and live to tell the tale? Leave a comment so we can feel foolish together.

When did scientific research become a paying gig?

Not so long ago, being paid to do science made you look bad.

Hey folks! Long time no see! I’m back from ‘new baby’ land and looking forward to getting some new and interesting posts up here. As some of you may know, I’ve transitioned from scientific research to freelance science writing. That means that most of my writing time goes to trying to earn my living. But I love this blog and getting to share thoughts and information on biology and evolution with all of you too much to want to let it go. So what I’m going to try is a more informal approach. Until now, I’ve written each blog post as though it were an entire article… longish, with a full story and some references, following a set format. Doing that takes a lot of time, and I haven’t been able to make it work with the circus that is life as a freelancer with small children. From now on, I’ll be trying out some shorter, more conversational posts with less of a rigid format. And I’ll be discussing a wider range of topics around science, its history, and its place in the world, rather than strictly stories about biology. I always welcome comments, so please let me know what you think of the new format and/or what you’d like to see in this space. Thanks for reading!

***

I’m currently working on a personal project related to Victorian-era natural history (botany & zoology), so my plan is to share what I’m learning and reading here over the next little while. As an armchair enthusiast of Victorian history, I’m always taken aback by just how much of what we’ve come to consider the norm in today’s culture has its roots in the 19th century. One fascinating example of this is our modern concept of the scientist.

In the early and even mid-19th century, scientific research wasn’t a ‘profession’, per se. That is, it wasn’t a job you went into expecting to earn a living. It was largely undertaken by men with independent fortunes, who used their personal money to finance their work. These ‘gentlemen scientists’, as we now refer to them (even the term ‘scientist’ wasn’t in use then), were the primary driving force in research. Even those with teaching positions in universities typically received only a small honorarium for their work. As a matter of fact, doing science for money was considered less than respectable at the time, which barred anyone without independent wealth from making a career of it. This didn’t change until the second half of the century, when these social taboos began to relax and government-funded science (still the norm today) became more common. During the transition between these two systems of conducting research, men who wished to go into science but needed to earn an income worried that being seen to study science for money would damage their reputations, and carefully cultivated their image as gentlemen so as not to be tainted by their earnings.

It’s ironic that in recent years, many university teaching positions have become so poorly paid that those who hold them need an independent source of income to survive. A return to the early Victorian mode of self-funded science isn’t something we should be aspiring to today.

A Q&A with The Atlantic’s Ed Yong

The author of a New York Times bestseller on microbes talks to us about science writing and blogging.

[Photo: Ed Yong]

[This post originally appeared on Science Borealis.]

Following his recent keynote address at the Canadian Society of Microbiologists conference in Waterloo, Ontario, my Science Borealis colleague Robert Gooding Townsend and I chatted with Ed Yong, author of the New York Times bestseller I Contain Multitudes, about getting started in science communication, using humour in your writing, and whether science blogging is dead, among other topics. Here is that conversation, edited for brevity and clarity.

RGT: You were a judge at BAHfest [a science comedy event], and you use humour in your work all the time. I’m wondering, how do you see humour as fitting into the work that you do?

EY: It’s never an explicit goal. I’m not sitting here thinking to myself, “I need to be funny”, but I think I’ve always taken quite an irreverent approach to most of the things I do, science writing included. I think that sometimes there is a view that science writing should be austere because science is a little bit like that, and I think that’s ridiculous. It benefits from a light touch, just as anything else does, and that’s sort of the approach that I very naturally gravitate towards, and not taking things too seriously.

EZ: Your own blog has now ended, and notably, the SciLogs network has been shut down over the last year. I was wondering if you think that the heyday of science blogging is over now and do you feel that newsletters like yours may be the next big thing for sci-comm people, despite being somewhat less interactive than blogs?

EY: You know, there’s been a lot said about the death of blogs. I don’t necessarily think that’s true. I think that in the goodbye post to my blog I noted that I thought that blogs had just shifted into a new guise. The Atlantic was always a pioneer in fusing that kind of bloggy voice with the traditional rigours of journalism, and even back when I was blogging, before I joined the magazine, much of what I was seeing online had much the same depth of personality and voice that the best blogging I read had. So, I think those worlds have always been colliding, and I think that collision is now well underway, and a lot of what made blogs special has been absorbed into mainstream organizations. […] I think the spirit of blogging is very much still alive, and as for newsletters, I don’t see these things as the same. They’re different media. I started my newsletter just as a way of telling people about the work I was doing elsewhere. You know, the newsletter is not a blog; I’m not producing any original content for it, I’m just using it as a signpost so people can find my other work.

EZ: When you say that the voice that you see in mainstream media has become more like blogging, do you feel that it’s gotten a little less formal and more conversational?

EY: I don’t mean that it has across the board, but I think there are definitely publications like the Atlantic that have fused those things together. We have space for a lightness of touch in the way we write, provided that we’re still upholding the strictest journalistic ethics and standards.

RGT: One of the things you do very well in your work is that you maintain a high standard of journalistic integrity. That’s incredibly important, but I also wanted to probe how you face these challenges in science journalism. For example, there’s the question of how much are you trying to do good in your reporting, and how political an action is that? How do journalistic and scientific impartiality differ, and do you get caught in between?

EY: I think as journalists, we are not crusaders. We are definitely not advocates. The objective of my work is not to celebrate science, it’s not to get people on board with science, whatever that might mean. I’m not calling for more science funding or anything of the kind. What I am doing is telling people what is going on. I think obviously, all journalists have their own biases and their own starting points from which they approach the world. But I think we’re not chasing some artificial standard of neutrality. I think what instead is more important is being fair. […] If I’m going to write a piece slagging off one particular approach to science or debunking a lab’s work, I’m going to reach out to those people for their comments as well. I think that very much is just part of what we do. I don’t see any conflict there.

EZ: You’ve talked in the past about the importance to your career of having won the Daily Telegraph’s Young Science Writer prize in 2007. I was wondering, now that both that contest and the Guardian/Wellcome Trust science writing prize are no more, do you have any advice for a beginning science writer looking to get their writing out there and build a reputation?

EY: That prize was important to me, but if you talk to a wide variety of science writers, you’ll see that there’s no one route into the field. […] If you do a Google search for “On the Origin of Science Writers”, you’ll find a page on my now-defunct blog which collects stories from different science writers about how they got into the business. It’s a useful resource, and I think one thing you’ll notice when you read those stories is that there is no single route that everyone takes. It’s all very, very diverse. So I think the critical thing with the competition was that it forced me to write and prove myself. Now, there are different ways you can do that, but the really important thing is that, if you’re interested in making a start in this business, you need to actually start producing things. You need to start writing. My advice to people who say that they’re aspiring science writers is that there really is no such thing – you’re either currently writing about science, right now, in which case you are a science writer, or you are not, in which case you’re not.

RGT: In your recent keynote address at the University of Waterloo, you talked about the representation that you strive for in your stories, in terms of the gender balance and racial background of your sources. This is an important thing in terms of who we see when we see people in science. One thing that you haven’t talked about, at least there, is how this has affected you personally, as a person of colour. Are you comfortable saying anything about how that may have affected your progression in science journalism?

EY: I’m not sure I have anything massively useful to say. I wouldn’t say I’ve experienced obvious discrimination on that basis, and I’m not sure it’s affected my career in any particular way. It is something you bear in mind. When I go to journalism conferences and sci-comm meetings, these are still largely white spaces, and that does make a difference in terms of how you feel as a part of that community. In Britain, there is another science writer called Kevin Fong, who’s amazing and does a lot of radio and TV work. Kevin and I have this running joke between us, because we’ll turn up at events, and people will come up to me and congratulate me for the thing that Kevin did, and vice versa, because the idea of TWO East Asian science writers working in the same spaces is just clearly too much for some people to comprehend. So, you know, there is that, and it’s not fun, and it happens often enough to be annoying. It’s a very mild example of the kinds of problems that can arise when a field lacks diversity, and that’s as true of science journalism as it is of science itself.

RGT: You’ve talked about finding stories in topics that most people would find very boring and turning them into something interesting and engaging. If that’s a power you have, does it come with responsibilities? For example, if you think that climate change is an overwhelming threat, then distracting from it is detrimental. Where do your thoughts lie with that?

EY: I actually really don’t buy this argument at all. I think there’s no department of ranking all the world’s problems in order and then dealing with them one at a time. That’s not how people think about problems. That’s not how people react to the world around them. If we worked in that way, then, for example, I might never write about anything other than, say, the rise of fascism or climate change, or antibiotic-resistant bacteria, or what have you. [I don’t think we write or read about things] because they are necessarily important or they pose existential threats. I think we write about things because they are interesting to us.

EZ: Thank you very much for your time today.

EY: Thanks, guys. Pleasure talking to you.

What’s in a Name?

Part Two: How’s Your Latin?

The awesomely named Obamadon gracilis.  Image: Reuters

What do Barack Obama, Marco Polo, and the band Green Day have in common? They all have at least one organism named after them. Obama has several, including a bird called Nystalus obamai and an extinct reptile named Obamadon gracilis. Green Day’s honorary organism is the plant Macrocarpaea dies-viridis, “dies-viridis” being Latin for “green day.” Many scientists also have species named after them, usually as recognition for their contributions to a field. My own PhD advisor, Dr. Anne Bruneau, has a genus of legumes, Annea, named after her for her work in legume systematics.

“Pear-leaved Pear”   Photo via Wikimedia Commons

Scientific names, which are colloquially called Latin names but often draw from Greek as well, consist of two parts: the genus and the specific epithet. The two parts together name the species. Though many well-known scientists, celebrities, and other notables do have species named after them, most specific epithets describe some element of the organism or its life cycle. Many of these are useful descriptions, such as that of the (not so bald) bald eagle, whose scientific name is the more accurate Haliaeetus leucocephalus, which translates to “white-headed sea eagle.” (See here for some more interesting examples.) A few are just botanists being hilariously lazy with names, as in the case of Pyrus pyrifolia, the Asian pear, whose name translates as “pear-leaved pear.” So we know that this pear tree has leaves like those of pear trees. Great.

In contrast to common names, discussed in our last post, Latin names change far less over time and have no local variants. Soybeans are known to scientists as Glycine max all over the world, which gives researchers who share no spoken language a common point of reference. Latin is a good base language for scientific description because it’s a dead language, so its usage and meanings don’t shift over time the way those of living languages do. Until recently, all new plant species had to be officially described in Latin in order to be recognized; increasingly, though, descriptions in English alone are being accepted. Whether this is a good idea remains to be seen, since English usage may shift enough over the years to make today’s descriptions inaccurate in a few centuries’ time.

This isn’t to say that scientific names don’t change at all. Because scientific names are based in organisms’ evolutionary relationships to one another (with very closely related species sharing a genus, for example), if our understanding of those relationships changes, the name must change, too. Sometimes, this causes controversy. The most contentious such case in the botanical world has been the recent splitting of the genus Acacia.

The tree formerly known as Acacia. Via: Swahili Modern

Acacia is/was a large genus of legumes found primarily in Africa and Australia (discussed previously on this blog for their cool symbiosis with ants). In Africa, where the genus was first created and described, the tree is iconic. The image of the short, flat-topped tree against a savanna sunset, perhaps accompanied by the silhouette of a giraffe or elephant, is a visual shorthand for southern Africa in the popular imagination, and has been used in many tourism campaigns. The vast majority of species in the genus, however, are found in Australia, where they are known as wattles. When it became apparent that these sub-groups needed to be split into two different genera, one or the other was going to have to give up the name. A motion was put forth at the International Botanical Congress (IBC) in Vienna in 2005 to have the Australian species retain the name Acacia, because fewer total species would have to be renamed that way. Many African botanists and those with a stake in the acacias of Africa objected. After all, African acacias were the original acacias. The motion was passed, however, then challenged and upheld again at the next IBC in Melbourne in 2011. (As a PhD student in legume biology at the time, I recall people having firm and passionate opinions on this subject, which was a regular topic of debate at conferences.) It is possible it will come up again at this year’s IBC in China. Failing a major turnaround, though, the 80 or so African acacias are now known as Vachellia, while the over one thousand species of Australian acacias continue to be known as Acacia.

The point of this story is that, though Latin names may seem unchanging and of little importance beyond cataloguing species, they are sometimes both a topic of lively debate and an adaptable reflection of our scientific understanding of the world.

Do you have a favourite weird or interesting Latin species name? Make a comment and let me know!

What’s in a Name?

Part One: Common vs. Scientific Names

[Photo: staghorn sumac]

When I was a kid growing up on a farm in southwestern Ontario, sumac seemed to be everywhere, with its long, spindly stems, big, spreading compound leaves, and fuzzy red berries. I always found the plant beautiful, and had heard that First Nations people used the berries in a refreshing drink that tastes like lemonade (which is true… here’s a simple recipe). But often, we kids were warned by adults that this was “poison sumac,” not to be touched because it would give us itchy, burning rashes, like poison ivy did. In fact, plenty of people would cut down any nascent stands to prevent this menace from spreading. We were taught to fear the stuff.


THIS is the stuff you need to look out for. Via The Digital Atlas of the Virginia Flora

It was many years before I learned that the red-berried sumacs I grew up with were not only harmless, but also not closely related to the poisonous plant in question, which, as it turns out, has white berries and quite different leaves. Scientifically speaking, our innocent shrub is Rhus typhina, the staghorn sumac, while the rash-inducing plant is called Toxicodendron vernix. Not even in the same genus. Cautious parents had simply been confused by the similarity of the common names.


This story illustrates one of the ironies of common names for plants (and animals). Though they’re the way nearly everyone thinks of and discusses species, they’re without a doubt the most likely to confuse. Unlike scientific (Latin) names, which each describe a single species and are, for the most part, unchanging, a single common name can describe more than one species, can fall in and out of use over time, and may only be used locally. Also important to note is that Latin names are based on the taxonomy, or relatedness, of the species, while common names are usually based on either appearance, usage, or history.


This isn’t to say that common names aren’t valuable. Because common names describe what a plant looks like or how it is used, they can convey pertinent information. The common names of plants are also sometimes an important link to the culture that originally discovered and used the species, as in North America, where native plants all have names in the local languages of First Nations people. It seems to me, although I have no hard evidence to back it up, that these original names are now more often being used to form the Latin name of newly described species, giving a nod to the people who named it first, or from whose territory it came.


One high profile case of this in the animal world is Tiktaalik roseae, an extinct creature which is thought to be a transitional form (“missing link”) between fish and tetrapods. The fossil was discovered on Ellesmere Island in the Canadian territory of Nunavut, and the local Inuktitut word “tiktaalik”, which refers to a type of fish, was chosen to honour its origin.


But back to plants… Unlike staghorn sumac and poison sumac, which are at least in the same family of plants (albeit not closely related within that family), sometimes very distinct species of plants can end up with the same common name through various quirks of history. Take black pepper and bell or chili peppers. Black pepper comes from the genus Piper, and is native to India, while hot and sweet peppers are part of the genus Capsicum. Botanically, the two are quite distantly related. So why do they have the same name? Black pepper, which bore the name first, has been in use since ancient times and was once very highly valued. The confusion came about, it would seem, when Columbus visited the New World and, finding a fruit which could be dried, crushed, and added to food to give it a sharp spiciness, referred to it as “pepper” as well.

A black peppercorn. Easy to confuse with a chili pepper, I guess? Via: Wikimedia Commons


Another interesting, historically-based case is that of corn and maize. In English-speaking North America, corn refers to a single plant, Zea mays. In Britain and some other parts of the Commonwealth, however, “corn” is used to indicate whatever grain is primarily eaten in a given locale. Thus, Zea mays was referred to as “Indian corn” because it was consumed by native North Americans. Over time, this got shortened to just “corn”, and became synonymous with only one species. Outside of Canada and the United States, the plant is referred to as maize, which is based on the original indigenous word for the plant. In fact, in scientific circles, the plant tends to be called maize even here in North America, to be more exact and avoid confusion.


Not Spanish, not a moss. Via: Wikimedia Commons

And finally, for complete misinformation caused by a common name, you can’t beat Spanish moss. That wonderful gothic stuff you see draped over trees in the American South? It is neither Spanish nor a moss. It is Tillandsia usneoides, a member of the Bromeliaceae, or pineapple family, and is native only to the New World.


And that wraps up my very brief roundup of confusing common names and why they should be approached with caution. In part two, I’ll discuss Latin names, how they work, and why they aren’t always stable and unchanging, either.


There are SO many more interesting and baffling common names out there. If you know of a good one, let me know in the comments!


*Header image via the University of Guelph Arboretum

Forever Young

How Evolution Made Baby-faced Humans & Adorable Dogs

[Diagram: human head and body proportions from infancy through adulthood]

Who among us hasn’t looked at the big round eyes of a child or a puppy gazing up at us and wished that they’d always stay young and cute like that? You might be surprised to know that this wish has already been partially granted. Both you as an adult and your full-grown dog are examples of what’s referred to in developmental biology as paedomorphosis (“pee-doh-mor-fo-sis”), or the retention of juvenile traits into adulthood. Compared to closely related and ancestral species, both humans and dogs look a bit like overgrown children. There are a number of interesting reasons this can happen. Let’s start with dogs.

When dogs were domesticated, humans began to breed them with an eye to minimizing the aggression that naturally exists in wolves. Dogs that retained the puppy-like quality of being unaggressive and playful were preferentially bred. This brought along certain other traits associated with juvenile wolves, including shorter snouts, wider heads, bigger eyes, floppy ears, and tail wagging. (For anyone interested in a technical explanation of how traits can be linked like this, here’s a primer on linkage disequilibrium from Discover. It’s a slightly tricky but very interesting concept.) All of these traits are seen in young wolves but disappear as the animal matures; domesticated dogs, however, retain them throughout their lives. What began as a mere by-product of selecting for non-aggressive dogs is now being reinforced for its own sake. We love dogs that look cute and puppy-like, and are now breeding for that very trait, which can carry it to extremes, as in breeds such as the Cavalier King Charles spaniel, where it has led to breed-wide health problems.

An undeniably cute Cavalier King Charles spaniel, bred for your enjoyment. (Via Wikimedia Commons)

Foxes, another wild canid, have been experimentally domesticated by scientists interested in the genetics of domestication. Here, too, as the foxes are bred over numerous generations to be friendlier and less aggressive, individuals with floppy ears and wagging tails – traits not usually seen in adult foxes – are beginning to appear.

But I mentioned this happening in humans, too, didn’t I? Well, just as dogs resemble juvenile versions of their closest wild relative, humans bear a certain resemblance to juvenile chimpanzees. Like young apes, we possess flat faces with small jaws, sparse body hair, and relatively short arms. Scientists aren’t entirely sure what caused paedomorphosis in humans, but there are a couple of interesting theories. One is that, because our brains learn new skills best before maturity (you can’t teach an old ape new tricks, I guess), delayed maturity, and the suite of traits that come with it, allowed greater learning and was therefore favoured by evolution. Another possibility has to do with the fact that juvenile traits – the same ones that make babies seem so cute and cuddly – have been shown to elicit more helping behaviour from others. So the more subtly “baby-like” a person looks, the more help and altruistic behaviour they’re likely to receive from those around them. Since this kind of help can contribute to survival, it was selected for.

You and your dog, essentially. (Via The Chive)

Of course, dogs and humans aren’t the only animals to exhibit paedomorphosis. In nature, the phenomenon is usually linked to the availability of food or other resources. Interestingly, both abundance and scarcity can be the cause. Aphids, for example, are small insects that suck sap out of plants as a food source. Under competitive conditions in which food is scarce, the insects possess wings and are able to travel in search of new food sources. When food is abundant, however, travel is unnecessary, and wingless young are produced which grow into adulthood still resembling juveniles. Here, paedomorphosis is induced by abundant food. Conversely, in some salamanders, it is brought on by a lack of food. Northwestern salamanders are typically aquatic as juveniles and terrestrial as adults, having lost their gills. At high elevations, where the climate is cooler and a meal is harder to come by, many of these salamanders keep their gills and remain aquatic throughout their lives, because the water offers a better chance of survival. In one salamander species, the axolotl (which we’ve discussed on this blog before), metamorphosis has been lost completely, leaving them fully aquatic and looking more like weird leggy fish than true salamanders.

An axolotl living the young life. (Via Wikimedia Commons)

So paedomorphosis, this strange phenomenon of retaining juvenile traits into adulthood, can be induced by a variety of factors, but it’s a nice demonstration of the plasticity of developmental programs in living creatures. Maturation isn’t always a simple trip from point A to point B in a set amount of time. There are many, many genes at play, and if nature can tweak some of them for a better outcome, evolution will ensure that the change sticks around.


*Header image by: Ephert – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=39752841

Redesigning Life

John Parrington’s new book sets the stage for an informed debate on genetic modification

[Image: Redesigning Life book cover]

This post originally appeared on Science Borealis

“Imagine if living things were as easy to modify as a computer Word file.” So begins John Parrington’s journey through the recent history and present-day pursuits of genetic modification in Redesigning Life. Beginning with its roots in conventional breeding and working right up to the cutting-edge fields of optogenetics, gene editing, and synthetic biology, the book is accessible to those with some undergraduate-level genetics, or secondary-school biology and a keen interest in the subject. This audience will be well served by a book whose stated goal is to educate the public so that a proper debate can take place over the acceptable uses of genome editing.


Parrington doesn’t shy away from the various ethical concerns inherent in this field. While he points out, for example, that many fears surrounding transgenic foods are the result of sensational media coverage, he also discusses the very real concerns relating to issues such as multinational companies asserting intellectual property rights over living organisms, and the potential problems of antibiotic resistance genes used in genetically modified organisms (GMOs). Conversely, he discusses the lives that have been improved with inventions such as vitamin A-enriched “golden rice”, which has saved many children from blindness and death due to vitamin deficiencies, and dairy cattle that have been engineered to lack horns, so they can be spared the excruciating process of having their horn buds burned off with a hot iron as calves. These are compelling examples of genetic modification doing good in the world.


This is Parrington’s approach throughout the book: both the positive and negative potential consequences of emerging technologies are discussed. Particular attention is paid to the pain and suffering of the many genetically modified animals used as test subjects and models for disease. This cost is weighed against the fact that life-saving research could not go ahead without these sacrifices. No conclusions are drawn, and Parrington’s sprawling final chapter, devoted solely to ethics, is meandering and unfocussed, perhaps reflecting the myriad quagmires to be negotiated.


Weaving in entertaining and surprising stories of the scientists involved, Parrington frequently brings the story back to a human level and avoids getting too bogged down in technical details. We learn that Gregor Mendel, of pea-breeding fame, originally worked with mice, until a bishop chastised him for not only encouraging rodent sex but watching it. Mendel later commented that it was lucky that the bishop “did not understand that plants also had sex!” We’re told that Antonie van Leeuwenhoek, known as the father of microscopy, was fond of using himself as a test subject. At one point, he tied a piece of stocking containing one male and two female lice to his leg and left it for 25 days to measure their reproductive capacity. Somewhat horrifyingly, he determined that two breeding females could produce 10,000 young in the space of eight weeks.


The applications of the fast moving, emerging technologies covered in Redesigning Life will astound even those with some familiarity with modern genetics. The new field of optogenetics, for example, uses light-sensitive proteins such as opsins to trigger changes in genetically modified neurons in the brain when light is shone upon them. In a useful, yet nevertheless disturbing proof-of-concept experiment, scientists created mind-controlled mice, which, at the flick of a switch, can be made to “run in circles, like a remote-controlled toy.” More recently, sound waves and magnetic fields have been used to trigger these reactions less invasively. This technique shows potential for the treatment of depression and epilepsy.


The book goes into some detail about CRISPR/Cas9 gene editing, a process that has the potential to transform genetic modification practices. This system is efficient, precise, broadly applicable to a range of cell types and organisms, and shortens the research timeline considerably compared to traditional methods of creating GMOs. It underpins most of the other technologies discussed in the book, and its applications seem to be expanding daily. In the words of one of its developers, Jennifer Doudna, “Most of the public does not appreciate what is coming.” These words could be applied to almost any technology discussed in this book. Already within reach are so-called “gene drive” technologies, which could render populations of malaria-bearing mosquitoes – or any other troublesome species – sterile, potentially driving them to extinction, albeit with unknown ancillary consequences. Researchers have also developed a synthetic genetic code known as XNA, which sports two new nucleotides and can code for up to 172 amino acids, as opposed to the usual 20. Modifying organisms to contain XNA opens up the possibility of creating proteins with entirely novel functions, as well as the tantalizing prospect of plants and animals that are entirely immune to all current viruses, owing to the viruses’ inability to hijack a foreign genetic code for their own uses.


While the book touches on agriculture, its main preoccupation is medical research. Despite many of the therapies covered being far from ready for use in humans, one can’t help but feel that a revolution in the treatment of diseases, both infectious and genetic, is at hand. Only a year ago, gene editing was used to cure a baby girl of leukemia by engineering immune cells to recognize and attack her cancerous cells. In the lab, the health of mice with single-gene disorders such as Huntington’s disease and Duchenne muscular dystrophy is being significantly improved. Writing in 1962 in his book The Genetic Code, Isaac Asimov speculated that someday “the precise points of deficiency in various inherited diseases and in the disorders of the cell’s chemical machinery may be spotted along the chromosome.” Some 54 years later, we have the technology not only to spot these points but to fix them as precisely as a typo in a manuscript.