Eunice Foote and the greenhouse effect

In the last few months, a lot of us have been confined to our homes. We no longer commute daily to our workplace, spend less time stuck in traffic, and have canceled our travel plans. With fewer cars on the road, fewer airplanes in the sky, and some industrial activities shut down, global CO2 emissions are likely to have decreased. In fact, a recent paper estimated the emission reductions based on predictive models and reported a ~17% decrease in daily global CO2 emissions by April 2020 compared with mean 2019 levels.

The same paper predicts that total emissions for 2020 will decrease by somewhere between 4% and 7% compared to 2019, depending on the duration of confinement.

Most of the scientific community agrees that changes in the amount of CO2 in the atmosphere have an effect on global temperature; this fact will likely not surprise you. But perhaps it will surprise you that this has been known for a long time.

Not just for a few decades. But for two centuries.*

Climate science in the 19th century, yes, it was a thing.

In the early 19th century, scientists suspected that the earth’s atmosphere had the ability to keep the planet warm by transmitting visible light but absorbing infrared light (or heat), and that human activity could change the atmosphere’s temperature. Among them was Joseph Fourier, who in an 1827 paper mentioned “the progress of human societies” having the potential to – in the course of many centuries – change the “average degree of heat”.

In 1859, Fourier’s theoretical musings were turned into experiments, when John Tyndall, an Irish physicist, published his study investigating the absorption of infrared radiation by different gases. This was the first** experiment showing how heat absorption by the atmosphere could lead to temperature rises, and that certain gases – such as water vapor, methane, and CO2 – absorb more heat than others.

John Tyndall’s experimental setup.

Three years earlier…

But wait! Three years before Tyndall’s paper, another paper had appeared in the American Journal of Science and Arts: Circumstances affecting the Heat of the Sun’s Rays, showing how the sun’s rays interacted with different gases and concluding that CO2 trapped the most heat compared to air and hydrogen. The paper was by a woman named Eunice Newton Foote.***

There are no known photos of Eunice Newton Foote, so here is a photo of her daughter, Mary Foote Henderson, instead.

Now, years after her experiments and findings, Foote is credited as the first scientist to have experimented on the warming effect of the sun’s light on the earth’s atmosphere and the first to theorize that changing levels of CO2 would change the global temperature. In her paper, she stated that:

“An atmosphere of that gas would give to our earth a high temperature; and if, as some suppose, at one period of its history, the air had mixed with it a larger proportion than at present, an increased temperature from its own action, as well as from increased weight, must have necessarily resulted.”

Foote (1819-1888) was a farmer’s daughter and lived in a time when women were typically not considered scientists. She did not have a sophisticated laboratory, so her experimental setup was rather amateurish compared to Tyndall’s a few years later. When her results were presented at the American Association for the Advancement of Science conference, it was not by her, but by Professor Joseph Henry of the Smithsonian.

Eunice Foote’s experiment for her studies on greenhouse gases, as recreated in the 2018 short film “Eunice.” Credit: Paul Bancilhon and Matteo Marcolini

While she gained some recognition for her work at the time, it was rather limited, and she was largely forgotten by history. Henry presented her work at the conference, prefacing the talk with: “Science was of no country and of no sex. The sphere of woman embraces not only the beautiful and the useful, but the true.” She was also praised in the September 1856 issue of Scientific American, in a piece titled “Scientific Ladies.”

It wasn’t until 2010, however, when her paper was rediscovered by a retired petroleum geologist, that her name was slowly put back on the climate science map.

Three strikes, and you’re out!

According to John Perlin, who wrote a book about Foote:

“She had three strikes against her. She was female. She was an amateur. And she was an American.”

There weren’t very many female scientists at the time. Women had a hard time getting a formal (science) education.

She did not have a traditional science education, and her experimental setup was nowhere near as sophisticated as Tyndall’s. Her experiment was much simpler and more limited in its results: she was not able to distinguish between visible and infrared radiation. But her serendipitous discovery that CO2 traps more heat than the other gases she tested, and her hypothesis that changing atmospheric CO2 would affect global temperature, were the first of their kind.

Finally, Europe was still the epicenter of scientific discovery at the time. The US – and physics in the US – was still very much up-and-coming. On top of that, communicating discoveries overseas without fiber optics and the internet was just not as easy as it is today.

For many decades, John Tyndall was considered the father of climate science, and granted, he was the first to show that certain gases absorb more heat radiation (rather than radiation in general) than other gases. But Foote was the mother, first theorizing what we now know to be true: changing levels of atmospheric CO2 result in changes in global temperature. And now, almost two centuries later, she’s remembered for it.

So while you’re working from home and putting less CO2 in the atmosphere as a result, spare a little thought for the woman scientist who first linked CO2 with temperature. The fact that she did is pretty amazing.


I highly recommend this Cogito video on the history of Climate Change:


* ± a decade or two. But who’s counting?

** spoiler: it was not.

*** This Foote-note is just for the pun.

Happy New Decade?

We’re two days into 2020. Just a few days ago, you couldn’t go online without seeing some recap of the last decade, or go anywhere without people jokingly saying “see you next year! Oh, next decade!”

And there I’d be, suppressing the urge to go “Well, actually…” and point out that because our calendar started in the year 1 A.D., wouldn’t the next decade start on January 1st, 2021? A year from now?

Why does the year even start on Jan 1st?

The first assumption is that everyone uses the same calendar: the Gregorian calendar.

In October 1582, Pope Gregory XIII introduced the Gregorian calendar as an update to the Julian calendar. Both are solar calendars, i.e. they count a year as the time it takes the sun to return to a fixed point in the sky, which is ~365 days. This is different from a lunar calendar – based on the cycles of the moon, in which a month (or moonth?) lasts roughly 29.5 days – which does not nicely sync up with the seasons.

In the Gregorian calendar, the astronomical cycle of the earth around the sun – which takes about 365.2422 days – is accounted for by adding a leap day every four years, skipping it every 100 years, but keeping it every 400 years*. That gives an average calendar year of 365.2425 days, which still leaves an error of about 26 seconds a year, or roughly one day every 3,030 years.
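As a quick illustration – a sketch of my own in Python, not something from the linked sources – that rule, and the 365.2425-day average year it implies, fit in a few lines:

```python
def is_leap_year(year: int) -> bool:
    """Gregorian rule: every 4th year, except centuries, except every 400th year."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 97 leap days per 400-year cycle -> an average year of 365 + 97/400 days
leap_days = sum(is_leap_year(y) for y in range(2000, 2400))
print(leap_days, 365 + leap_days / 400)  # 97 365.2425
```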

Most of the world adopted the Gregorian calendar for daily life somewhere between 1582 and the early 20th century, even if cultural and religious calendars were kept in parallel.

If you think too much about it, months seem completely arbitrary – except maybe for solstices landing on roughly the same date – and other systems, like the Equal Month Calendar, which has 13 months of 28 days plus an extra day or two depending on leap years, sound more plausible.

But for all intents and purposes, the whole world has agreed that the year starts on January 1st, based on the ideas of a 16th-century pope. And when decades start depends on Christianity too.

The year 1 A.D.

Our current calendar starts counting from 1 A.D. (Anno Domini), with the year 1 being the year Jesus was allegedly born.

Allegedly, because it wasn’t until 525 A.D. that the year was set by Dionysius Exiguus while he was devising his method to calculate Easter. Historians believe that Jesus was actually born at least a few years earlier, and not necessarily on Christmas day. In any case, 1 A.D. has now generally been adopted as “the start of counting of years” and is sometimes referred to as 1 C.E. (common era) to avoid religious connotations.

But the most interesting thing about 1 A.D. – for me at least – is that there is no year 0.

The Roman numeral system had no concept of zero, and it wasn’t until the eighth century that the Arabic numeral for zero was introduced in Europe – and it only came into widespread use in the seventeenth century.

If that’s the case, did we really just start a new decade?

Counting from zero

A decade is simply “a span of 10 years,” so new decades are constantly starting. We typically don’t celebrate them, except for the decades of our lives and those marked out by the calendar.

In the 20th century, we started referring to decades as groups of years sharing the same digits: the years 1990-1999 are referred to as the nineties (dixit nineties kid), as opposed to counting from 1 to 10 (in which case the nineties would have run from 1991 to 2000). You might think counting from 1 is the more “mathematically correct” way, but even in programming, there are systems that start counting from 0 simply as a convention.

Every 10 years, and definitely at the start of the new millennium, the same discussion occurs: when do we start counting a decade/century/millennium?

A poll from a month ago shows that most Americans (64%) saw the new decade start yesterday, while about a fifth (19%) were not sure. Only 17% answered the new decade will start on January 1, 2021.

In my opinion, it doesn’t really matter. Having a new decade start on a year ending with a 0 looks nicer, and in the 21st century means we can make fun novelty glasses where the lenses fit in the zeros. I’ll continue to be pedantic and say that the decade doesn’t start until next year if only so I can forget about it and not do that “looking back on a decade” thing, ever.

In any case, whether you think the year started yesterday, or the decade, or if it was just a regular old day, have a marvelous 2020!

From Strange Planet by Nathan W. Pyle

*The rule is that every year divisible by four is a leap year, except for years divisible by 100, with the exception to that exception being years divisible by 400.

Sources: in-text links

Apollo 11

50 years ago, on this very day, Apollo 11 was launched. Four days later came the first manned landing on the moon.

You can watch the whole launch here (with the actual take off around 3:27:10):

[CBS]

In the meantime, I spent (some of) the afternoon shooting water rockets with a group of campers. It was marvellous.


150 years of elements

Here’s a riddle for you: what hangs in every chemistry classroom in middle and high school, has led to the creation of several nerdy t-shirts, and celebrated its 150th birthday yesterday?

Okay, it’s not a very funny riddle. Nor is it a very difficult one. The answer is: the periodic table of elements, first published on the 6th of March 1869 – exactly 150 years minus-one-day ago – by the Russian chemist Dmitri Mendeleev.

Professor of chemistry Dmitri Mendeleev in his study at the University of Saint Petersburg.

From Alchemy to Chemistry

In the olden days, we would have turned to alchemists to ask our questions about fundamental elements and what stuff makes up stuff. Even though alchemy was not really a “science” in the pure sense of the word – it relied heavily on spiritualism, philosophy and even magic – it set the stage for what would later become chemistry. And while alchemists were mostly trying to turn random metallic rocks into gold, or brew an elixir for eternal life, they were the first to attempt to identify and organize the different substances occurring in nature. The Elements.

The earliest basic elements were considered to be earth, water, air, and fire. The discovery of what we might call “chemical elements” really kicked off in 1669 in Germany, with a merchant by the name of Henning Brand. Like many chemists-avant-la-lettre (alchemists), he was trying to discover the Philosopher’s stone. However, like many muggles, he was not acquainted with Nicolas Flamel and did not succeed (Side note: Nicolas Flamel was actually based on a real person!). Instead, while distilling urine – as you would while trying to create eternal life – he discovered a glow-in-the-dark substance: phosphorus. And with that, the hunt for the elements had begun.

Chemistry can be considered to have originated in 1789, when Antoine-Laurent de Lavoisier wrote what is said to be the first modern chemistry textbook. In this book, he defined an element as a substance that cannot be broken down into a simpler substance. A fundamental particle. This definition lasted until the discovery of the subatomic particles (electrons, protons, and neutrons) between the 1890s and the 1930s. Lavoisier’s list of elements included things like oxygen, hydrogen, and mercury, but also light.

Dalton’s symbols of the elements. 1806 (Wellcome Images). Combine numbers 24 and 28 with some gin and get a nice cocktail.

Let’s gloss over most of the 19th century, during which multiple scientists realized that the atomic weights of elements were multiples of that of hydrogen (William Prout) and that there was a certain periodicity in physical and chemical properties when the elements were arranged according to their atomic weights (Alexandre-Émile Béguyer de Chancourtois). The early attempts to classify the elements were based on this periodicity, and eventually, our Mendeleev came along.

“Chemical Solitaire”

The Russian chemist Dmitri Mendeleev is the father of the modern periodic table. In fact, in Belgium, we call the periodic table of elements “Mendeleev’s table of elements”. After (allegedly) playing “chemical solitaire” on long train journeys – quite common in Russia, I’m sure – he came up with a classification method based on arranging the elements by atomic mass and classifying them according to their properties. Elements in one group (column) have the same number of valence electrons: the number of electrons in the outer shell of the atom, available to react with other elements. Elements in the same column therefore form bonds with other elements in the same way, and form similar types of materials.

Because there were some gaps in the table – some atomic weights missing – he predicted the existence of elements that were yet to be discovered, and what their chemical properties would be. And this is what made his classification method so ground-breaking.

And indeed, in 1886 germanium was discovered, with properties – as predicted – similar to silicon. Same for gallium in 1875 (properties similar to aluminum) and scandium in 1879 (properties similar to boron), filling in some of the gaps in his periodic table.

The periodic table from Osnovy khimii (Principles of Chemistry) by Dmitri Mendeleev, published in St. Petersburg, 1869-1871.

The gaps are filled

Since 1869, the gaps in the periodic table have been filled, and new elements are discovered or created every few years, adding to the high end of the table. The last update to the periodic table was in 2016, when the elements nihonium (113), moscovium (115), tennessine (117) and oganesson (118) were added to the list.

So today – okay, yesterday – we celebrated 150 years of chemical element classification, the anniversary of the periodic table of elements, and the collective pain of decades of high schoolers memorizing atomic masses and numbers of valence electrons.

The table of elements as usually taught today, with pictures of applications for each element (from https://elements.wlonk.com/index.htm).

Inspired by the coverage by vrt nws.

The interactive table of elements in the cover picture is from
http://periodictable.com/


In the spotlight: Juana Inés de la Cruz

Since I first gained the use of reason my inclination toward learning has been so violent and strong that neither the scoldings of other people … nor my own reflections … have been able to stop me from following this natural impulse that God gave me. He alone must know why; and He knows too that I have begged him to take away the light of my understanding, leaving only enough for me to keep His law, for anything else is excessive in a woman, according to some people. And others say it is even harmful. 

I read this quote in Contact by Carl Sagan. It was written by Juana Inés de la Cruz in her Reply to the Bishop of Puebla in 1691. The Bishop had attacked her scholarly work as being inappropriate for a woman; while he claimed to agree with her views, he didn’t think them appropriate for her sex. Rather than writing, he argued, she should devote her life to prayer, an endeavor much more suitable for a woman.

Aargh. 17th century clergymen are just the worst.

We’d better talk about this amazing woman then:

Juana Inés de la Cruz lived in (what was then New Spain and is now) Mexico in the 17th century and was a self-taught polyglot (my favorite type of inspirational person). She studied scientific thought and philosophy, and she was a composer and a poet – all in an age long before women were allowed to do anything that involved using their brains.

If we may believe the stories, Juana started teaching herself at a young age by hiding away to read her grandfather’s books. She supposedly learned how to read and write Latin at the age of three. At the age of 16, she asked her mother’s permission to disguise herself as a man so she could study, in some avant-la-lettre version of She’s the Man; but her mom wouldn’t let her, so she had to continue to study in secret. By then, she already knew Latin, Greek, Nahuatl, and accounting – the most important language of them all *ahem*.

In 1669, she became a Hieronymite nun so she could study in freedom – other orders were much stricter and wouldn’t allow her to pursue her passion for knowledge, philosophy, and writing. As a nun, she wrote on the topics of religion, feminism, and love, often criticizing the hypocrisy of men and defending women’s right to education. In Reply to Sister Philotea, she wrote:

Oh, how much harm would be avoided in our country [if women were able to teach women, in order to avoid the danger of male teachers in intimate settings with young female students.]

[Such hazards] would be eliminated if there were older women of learning, as Saint Paul desires, and instructions were passed down from one group to another, as in the case with needlework and other traditional activities.

Okay, now I’m imagining needlework maps of our universe. That’d be cool.

Unfortunately, this story has an unhappy ending. According to some sources, rather than being censored by the church, Sister Juana decided to stop writing and sell all her books, musical instruments, and scientific tools. Other sources claim that her belongings were confiscated by the bishop due to her defiance towards the church. Either way, many of her writings have been lost, and she died soon after, in 1695, after caring for other nuns during an outbreak of the plague.

There’s a lot of buzz about inspirational women, in science or not, nowadays. With lots of books full of inspiring stories, such as Rachel Ignotofsky’s Women in Science, finding empowering role models has never been easier. And I love it. Showing as diverse a range of inspirational historical figures as possible provides everyone with role models they can identify with and aspire to. However, I have noticed that my knowledge of inspirational women is primarily European-based. So this is me trying to change that.

A painting of Juana Ines de la Cruz
“One can perfectly well philosophize while cooking supper.” – Multitasking 101

________________________________________________________

Source: ye old trustworthy

Polymath (πολυμαθής)

Sometimes I feel like I was born in the wrong era.

Usually, this feeling is music-related. Now that I have renewed access to my dad’s old record collection (and a record player, #Hipster), I can’t help but feel that rock music from the ’70s and ’80s surpasses anything being made now. Comparing music from the “olden days” to music now is of course not entirely fair; what still remains has already withstood the test of time, while current music hasn’t had to (yet).

Music aside, my wrong-time-feeling also applies to how I feel about science and research. Nowadays, scientific discoveries seem to always be the result of countless years of hard work by entire teams of scientists. There is so much knowledge and information out there that it seems imperative to find one’s own little niche and specialise, specialise, specialise. It is impossible to be a master of all.

However, I long for the golden days of the polymaths and the homines universales, when academics were interested in all fields. They were allowed, or even required, to branch out and study all the sciences, not to mention the humanities, linguistics and the arts. I’m speaking of people like Galileo Galilei and Leonardo da Vinci. My favourite person, D’Arcy Thompson, would also be considered a polymath.

A polymath is defined as someone with “knowledge of various matters, drawn from all kinds of studies ranging freely through all the fields of the disciplines, as far as the human mind, with unwearied industry, is able to pursue them” (1). I noticed while perusing the Wikipedia page that the examples given of Renaissance Men are indeed all men. Even if I had been born in the right era to be a homo universalis, I would still have been born the wrong gender.

However, there are at least a few examples of female polymaths, and I wanted to introduce you to one of them: Dorothy Wrinch. Just in case you wanted a more nuanced example.
Dorothy Maud Wrinch (12 September 1894 – 11 February 1976)
Dorothy Wrinch was a mathematician by training but also showed an interest in physics, biochemistry and philosophy. She is someone who – even though I’ve only recently heard of her – is an excellent example of the homo universalis I wish I could be. She was also a friend of D’Arcy Thompson, though if I remember correctly, they mostly kept up a written correspondence.

In any case, Dorothy is known for her mathematical approaches to explaining biological structures, such as DNA and proteins. Most notably, she proposed a mathematical model for protein structure that – albeit later disproved – set the stage for biomathematical approaches to structural biology, and mathematical interpretations of X-ray crystallography.
She was a founding member of the Theoretical Biology Club, a group of scientists who believed that an interdisciplinary approach – combining philosophy, mathematics, physics, chemistry and biology – could lead to the understanding and investigation of living organisms.

She is described as “a brilliant and controversial figure who played a part in the beginnings of much of present research in molecular biology.  (…) I like to think of her as she was when I first knew her, gay, enthusiastic and adventurous, courageous in face of much misfortune and very kind.” (2)
Actually, come to think of it, maybe Dorothy was born in the wrong era. Nowadays, using mathematical approaches to protein structure is practically commonplace. Though I’m not quite sure how well philosophy would fit in.

Anyway, I still feel that interdisciplinary research, and having broad interests, is not the easiest path to go down. But as long as we have inspirational people to look up to, past and present, we know it is worth a try.

(Wow, went way overboard with the #Inspirational stuff towards the end there.)


(1) As defined by Wower, from Wikipedia.
(2) Dorothy Crowfoot Hodgkin (Wrinch’s obituary in 1976).
An updated version of this post was published on the Marie Curie Alumni Association blog on March 19, 2019.

Final thoughts (100 years, part VII)

To end my series of posts on the man and the book (D’Arcy Thompson and On Growth and Form respectively, the latter a book with over 1000 pages), I wanted to share a few more quotes from and about him that I found interesting enough to type out:

“In his figure and bearded face there was majestic presence; in his hospitality there were openness, kindness and joviality; in his ever quick wit were the homely, the sophisticated and, at times, the salty… in status he became a very doyen among professors the world over; in his enquiring mind he was like those of whose tongue and temper he was a master, the Athenians of old, eager ‘to tell or hear some new thing'” – Professor Peacock (1)

  1. With the name Professor Peacock, I can’t help but imagine a flamboyant, multicolour-labcoat-wearing, frizzle-haired man…
  2. I hope the meaning of the word salty has changed over time…

There is a certain fascination in such ignorance; and we learn without discouragement that Science is “plutôt destinée à étudier qu’à connaître, à chercher qu’à trouver la vérité.” (2)
(Rather destined to study than to know, to seek the truth rather than to find it.)

#IgnoranceIsBliss?

In my opinion the teaching of mechanics will still have to begin with Newtonian force, just as optics begins with the sensation of colour and thermodynamics with the sensation of warmth, despite the fact that a more precise basis is substituted later on. (3)

As a self-proclaimed science communicator, I often find it difficult to judge how much to simplify things. On the other hand, making things relatable to everyday experiences does not necessarily mean telling untruths. Classical physics may not be valid for every single situation, but it is often enough to describe what is happening without needing to resort to the more complicated relativistic physics. And you don’t have to start quoting wavelengths when a colour description would do just as well. Fill in the details later, if necessary.


Some quotes on evolution and natural selection:

And we then, I think, draw near to the conclusion that what is true of these is universally true, and that the great function of natural selection is not to originate, but to remove. (4)

Unless indeed we use the term Natural Selection in a sense so wide as to deprive it of any purely biological significance; and so recognise as a sort of natural selection whatsoever nexus of causes suffices to differentiate between the likely and the unlikely, the scarce and the frequent, the easy and the hard: and leads accordingly, under the peculiar conditions, limitations and restraints which we call “ordinary circumstances,” one type of crystal, one form of cloud, one chemical compound, to be of frequent occurrence and another to be rare. (5)


We can move matter, that is all we can do to it. (6)

On a fundamental level, are we really able to build things? Aren’t we just rearranging the building blocks?

I know that in the study of material things, number, order and position are the threefold clue to exact knowledge; that these three, in the mathematician’s hands, furnish the “first outlines for a sketch of the universe“, that by square and circle we are helped, like Emile Verhaeren’s carpenter, to conceive “Les lois indubitables et fécondes qui sont la règle et la clarté du monde.” (7)
(The unquestionable and fruitful laws that rule and clarify the world.)

For the harmony of the world is made manifest in Form and Number, and the heart and soul and all the poetry of Natural Philosophy are embodied in the concept of mathematical beauty. (8)

Delight in beauty is one of the pleasures of the imagination … (9)

#MathIsLife. Thank you, D’Arcy, for the 1000+ pages of mind-expanding, educational and philosophical topics.

Picture of D'Arcy Thompson and his pet parrot


Sources:

(1) D’Arcy Thompson and his zoology museum in Dundee – booklet by Matthew Jarron and Cathy Caudwell, 2015 reprint
(2) On Growth and Form – p. 19
(3) Max Planck
(4) On Growth and Form – p. 269-270
(5) On Growth and Form – p. 849
(6) Oliver Lodge
(7) On Growth and Form – p. 1096
(8) On Growth and Form – p. 1096-1097
(9) On Growth and Form – p. 959
(2, 4-6, 8-9) from D’Arcy Thompson, On Growth and Form, Cambridge University Press, 1992 (unaltered from 1942 edition)

When size matters (100 years, Part VI)

Neat process diagrams of metabolism always gave the impression of some orderly molecular conveyer belt, but the truth was, life was powered by nothing at the deepest level but a sequence of chance collisions. (1)

Zoom down far enough (but not too far – or the Aladdin merchant might complain) and all matter is just a soup of interacting molecules. Chance encounters and interactions, but with a high enough probability to happen. In essence, life is a series of molecular interactions (that, in turn, are atomic interactions and so on and so on…)

The form of the cellular framework of plants and also of animals depends, in its essential features, upon the forces of molecular physics. (2)

Quite often, we can ignore those small-scale phenomena, but only as long as the system we are describing is large enough. As in physics, in biological systems size does matter (*insert ambiguous joke here*). We have to adapt the governing physical rules depending on the scale we are observing. Do we consider every quantum-biological detail, can we use the cell as the smallest entity, or can we even use whole organisms as the smallest functional entity?

Life has a range of magnitude narrow indeed compared with that with which physical science deals; but it is wide enough to include three such discrepant conditions as those in which a man, an insect and a bacillus have their being and play their several roles. Man is ruled by gravitation, and rests on mother earth. A water-beetle finds the surface of a pool a matter of life and death, a perilous entanglement or an indispensable support. In a third world, where the bacillus lives, gravitation is forgotten, and the viscosity of the liquid, the resistance defined by Stokes’s law, the molecular shocks of the Brownian movement, doubtless also the electric charges of the ionised medium, make up the physical environment and have their potent and immediate influence on the organism. (3)
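To put rough numbers on the contrast between the man and the water-beetle, here is a minimal sketch of my own (not from the book): the Bond number, ρgL²/σ, compares the pull of gravity on a roughly water-dense body of size L to the surface tension acting on it at an air-water interface.

```python
# Rough, illustrative orders of magnitude only (my own sketch, not D'Arcy's).
RHO = 1000.0    # kg/m^3, density of water (organisms are mostly water)
G = 9.81        # m/s^2, gravitational acceleration
SIGMA = 0.072   # N/m, air-water surface tension

def bond_number(length_m: float) -> float:
    """Gravity relative to surface tension for a body of the given size."""
    return RHO * G * length_m**2 / SIGMA

for name, size in [("human (~1.7 m)", 1.7),
                   ("water beetle (~5 mm)", 5e-3),
                   ("bacillus (~2 um)", 2e-6)]:
    bo = bond_number(size)
    regime = "gravity dominates" if bo > 1 else "surface forces dominate"
    print(f"{name:21s} Bo = {bo:9.2e} -> {regime}")
```

The water-beetle lands right at the crossover (a Bond number of order 1) – D’Arcy’s “perilous entanglement or an indispensable support” – while for the bacillus gravity is, as he puts it, forgotten.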

Observing life at the smallest scales (by which I mean cells and unicellular organisms) at least has the advantage that the rules driving form and structure can, in many cases, be considered relatively simple: surface tension.

In either case, we shall find a great tendency in small organisms to assume either the spherical form or other simple forms related to ordinary inanimate surface-tension phenomena, which forms do not recur in the external morphology of large animals. (4)

While on the topic of size: as with many things in the universe, size is relative. I have noticed in conversations with colleagues and supervisors that what is considered small or large definitely depends on one’s perspective (and often on whatever size that person typically studies). I would assume that for a zoologist, a mouse is a small animal, but tell a microscopist they have to image an area of 1 mm² and the task seems monstrous. For a particle physicist, a micrometre is immense, but for an astrophysicist, the sun is actually quite close.

We are accustomed to think of magnitude as a purely relative matter. We call a thing big or little with reference to what it is wont to be, as when we speak of a small elephant or a large rat; and we are apt accordingly to suppose that size makes no other or more essential difference. (5)

Undoubtedly philosophers are in the right when they tell us that nothing is great and little otherwise than by comparison. (6)

There is no absolute scale of size in the Universe, for it is boundless towards the great and also boundless towards the small. (5)

That’s the amazing thing about science: we strive to understand the universe on all scales. The universe is mindblowing in its size, in both directions on the length scale.

We distinguish, and can never help distinguishing, between the things which are at our own scale and order, to which our minds are accustomed and our senses attuned, and those remote phenomena which ordinary standards fail to measure, in regions where there is no habitable city for the mind of man. (7)

Good thing we have scientists, amazing minds, capable of studying, visualising and even starting to understand the universe on all its scales…


My mind might be boggled, but here’s a man that looks like his mind contains the universe. (D’Arcy in his 80s)


(1) Permutation city – Greg Egan, p. 67
(2) Wildeman
(3) On Growth and Form – p. 77
(4) On Growth and Form – p. 57
(5) Gulliver
(6) On Growth and Form – p. 24
(7) On Growth and Form – p. 21
(3-4, 6-7) from D’Arcy Thompson, On Growth and Form, Cambridge University Press, 1992 (unaltered from 1942 edition)

Let’s get physical (100 years, Part V)

[…] of the construction and growth and working of the body, as of all else that is of the earth earthy, physical science is, in my humble opinion, our only teacher and guide. (1)

You might have seen the xkcd comic ranking different scientific disciplines by their purity (and if you haven’t, it’s just a bit of scrolling away). The idea it portrays is that all sciences are basically applied physics (which is in turn applied mathematics). In other words: if you go deep enough into a subject, you eventually end up explaining it with principles from physics. And this is the same principle D’Arcy explores in his book. That has over 1000 pages, did you know that?

xkcd comic on scientific fields arranged by purity, with Mathematics considered the "Most pure"

A famous D’Arcy quote states that the study of numerical and structural parameters is the key to understanding the Universe:

I know that in the study of material things number, order and position are the threefold clue to exact knowledge, and that these three, in the mathematician’s hands, furnish the ‘first outlines for a sketch of the Universe.’ (2)

You can ask the average high school student about mathematics, and the usual response would probably be something along the lines of: “Ugh, I’ll never use this for anything.” Sometimes, it might be difficult to see the everyday use of mathematics, or even the not-so-everyday use. But in reality, the possibilities are endless (given that we are open to having long lists of endless equations that need a supercomputer to solve – probably).

We are apt to think of mathematical definitions as too strict and rigid for common use, but their rigour is combined with all but endless freedom. The precise definition of an ellipse introduces us to all the ellipses in the world; the definition of a ‘conic section’ enlarges our concept, and a ‘curve of higher order’ all the more extends our range of freedom. (3)

It might not be straightforward to see how mathematics (or physics, for that matter) would help a biologist understand natural processes. However, there are a few examples of how physical properties, forces or phenomena are used in biology, such as helping bone repair:

The soles of our boots wear thin, but the soles of our feet grow thick the more we walk upon them: for it would seem that the living cells are “stimulated” by pressure, or by what we call “exercise,” to increase and multiply. The surgeon knows, when he bandages a broken limb, that his bandage is doing something more than merely keeping the part together: and that the even, constant pressure which he skilfully applies is a direct encouragement of the growth and an active agent in the process of repair. (4)

Nowadays the link between physics and biology is more accepted than it was a century ago, leading to new research fields such as biomechanics, mechanobiology and the “physics of cancer”. I have alluded to some of the links between cancer and physics in previous posts (Physics of Cancer, Part I and II). Mathematical models are commonly used to better understand biological processes, including signalling pathways, tissue formation and growth, and the changes occurring in cancer.
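To give a flavour of what such a model can look like – a generic textbook sketch of my own, not one taken from the posts or papers linked above – here is logistic growth, dN/dt = rN(1 − N/K), about the simplest description of a cell population growing until it reaches the carrying capacity of its environment:

```python
def logistic_growth(n0: float, r: float, k: float, dt: float, steps: int) -> list[float]:
    """Integrate dN/dt = r*N*(1 - N/K) with a simple Euler scheme."""
    n, trajectory = n0, [n0]
    for _ in range(steps):
        n += r * n * (1 - n / k) * dt
        trajectory.append(n)
    return trajectory

# Start from 1,000 cells with growth rate 0.5 per time unit and room for a million.
cells = logistic_growth(n0=1e3, r=0.5, k=1e6, dt=0.1, steps=200)
print(f"start: {cells[0]:.0f} cells, after 20 time units: {cells[-1]:.0f} cells")
```

Swap the parameters, or couple several such equations together, and you are already heading towards the kinds of signalling-pathway and tumour-growth models mentioned above.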

This goes to show (again) that “interdisciplinary” is not just a fancy buzzword; it is a core principle of scientific research. While I must admit from my own experience that carrying out interdisciplinary research might not be the easiest path, the potential discoveries and applications are even more endless. And while it might seem mind-boggling, I would argue that mind-bogglement is a good thing, stretching the potential of our minds and our understanding of the universe. And as far as I can read, D’Arcy agrees:

… if you dream, as some of you, I doubt not, have a right to dream, of future discoveries and inventions, let me tell you that the fertile field of discovery lies for the most part on those borderlands where one science meets another. There is a cry in the land for specialisation … but depend on it, that the specialist who is not reinforced by a breadth of knowledge beyond his own speciality is apt very soon to find himself only the highly trained assistant to some other man … Try also to understand that though the sciences are defined from one another in books, there runs through them all what philosophers used to call the commune vinculum, a golden interweaving link, to their mutual support and interpretation. (5)

So I guess my point is (if there even was a point in this post, apart from that the book has like over 1000 pages, in case you didn’t know): if you are a biologist, don’t be afraid to break some sweat and get physical. And the opposite goes for physicists. You might want to get a bit chemical as well, while you’re at it.

The Homo Universalis is back!


Featured image: math and shells.


(1) On Growth and Form, p 13.

(2) On Growth and Form, p. 1096
(3) On Growth and Form, p. 1027
(4) On Growth and Form, p. 985
(5) D’Arcy Thompson and his zoology museum in Dundee – booklet by Matthew Jarron and Cathy Caudwell, 2015 reprint
(1-4) from D’Arcy Thompson, On Growth and Form, Cambridge University Press, 1992 (unaltered from 1942 edition)

If only it were so simple (100 years, part IV)

Ever since I have been enquiring into the works of Nature I have always loved and admired the Simplicity of her Ways. (1)

In his book (yes, it’s about that again), D’Arcy supports his ideas through examples, through observations on biological systems that he can either explain through mathematical equations or directly compare to purely physical phenomena such as bubble formation. You might think that these are grave simplifications.

However, even in biology, which some people might call a “complex science”, simplifications are often used. Using cell culture rather than tissue. Isolating a single player in a pathway to see what its effect is. And quite often, a simplification holds true within the limits that have been set up to define it.

As was pointed out to me recently, the definition of “complex” is that something is “composed of many interconnected parts”. Meaning that it is not necessarily the antonym of “simple”. But “complex” is often taken to mean the same thing as “difficult”, even if that’s not necessarily the definition. In any case, it is definitely not the case that physics is a “simple science”:

But even the ordinary laws of the physical forces are by no means simple and plain. (2)

It makes sense to break down a complex system into its individual components and analyse these, perhaps simpler, concepts separately. There is great value in simplifying things. First of all, there is a certain beauty in simplicity:

Very great and wonderful things are done by means of a mechanism (whether natural or artificial) of extreme simplicity. A pool of water, by virtue of its surface, is an admirable mechanism for the making of waves; with a lump of ice in it, it becomes an efficient and self-contained mechanism for the making of currents. Music itself is made of simple things – a reed, a pipe, a string. The great cosmic mechanisms are stupendous in their simplicity; and, in point of fact, every great or little aggregate of heterogeneous matter involves, ipso facto, the essentials of a mechanism. (3)

When reading this paragraph, two things jumped out at me. Two weeks ago, I was at the annual meeting of the British Society for Cell Biology (joint with other associations) and heard an interesting talk by Manuel Théry. Part of his story relied on putting boundaries on a system. Without boundaries, whatever we would like to study just gets too complicated, and we are unable to understand what is happening. For example, when explaining how waves originate, it is much easier to use a system where water is confined in a box. We can then directly observe the wave patterns that start to occur and understand their interactions.

And then this: “Music itself is made of simple things – a reed, a pipe, a string. The great cosmic mechanisms are stupendous in their simplicity.” D’Arcy sure knew his way around words.

Simplifying also greatly increases our understanding of the principles of life, the universe and everything. When you think about it, it is done so often that you hardly even notice certain simplifications have been made. D’Arcy points this out as well:

The stock-in-trade of mathematical physics, in all the subjects with which that science deals, is for the most part made up of simple, or simplified, cases of phenomena which in their actual and concrete manifestations are usually too complex for mathematical analysis; hence, even in physics, the full mechanical explanation is seldom if ever more than the “cadre idéal” towards which our never-finished picture extends. (4)

When considering biological systems, he states the following:

The fact that the germ-cell develops into a very complex structure is no absolute proof that the cell itself is structurally a very complicated mechanism: nor yet does it prove, though this is somewhat less obvious, that the forces at work or latent within it are especially numerous and complex. If we blow into a bowl of soapsuds and raise a great mass of many-hued and variously shaped bubbles, if we explode a rocket and watch the regular and beautiful configurations of its falling streamers, if we consider the wonders of a limestone cavern which a filtering stream has filled with stalactites, we soon perceive that in all these cases we have begun with an initial system of very slight complexity, whose structure in no way foreshadowed the result, and whose comparatively simple intrinsic forces only play their part by complex interaction with the equally simple forces of the surrounding medium. (5)

For many biological and non-biological systems, the initial conditions might not seem complex. It is through interactions with other environmental conditions and other systems – each perhaps relatively simple on its own – that the system grows out to be complex. Obviously, as in the definition. But a complex system is more difficult to understand conceptually, more difficult to model. And that brings us to the value of simplification: looking at smaller, simpler systems that more closely resemble the “cadre idéal” allows us to pick apart the different players in a larger system. If we understand their individual behaviour, perhaps this can shed light on the collective behaviour.
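As a toy illustration of that idea – entirely my own, nothing like it appears in the book – here is a system where each cell interacts only with its two neighbours through one trivially simple rule (the “rule 30” cellular automaton), yet a single “on” cell grows into an intricate, hard-to-predict pattern:

```python
def step(cells: list[int]) -> list[int]:
    """Rule 30: a cell's next state is left XOR (centre OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

row = [0] * 41
row[20] = 1  # the (very simple) initial condition: one "on" cell in the middle
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

The triangle of structure that unfolds from that single cell gives a feel for how little the initial system “foreshadows the result”.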

As we analyse a thing into its parts or into its properties, we tend to magnify these, to exaggerate their apparent independence, and to hide from ourselves (at least for a time) the essential integrity and individuality of the composite whole. We divide the body into its organs, the skeleton into its bones, as in very much the same fashion we make a subjective analysis of the mind, according to the teachings of psychology, into component factors: but we know very well that the judgment and knowledge, courage or gentleness, love or fear, have no separate existence, but are somehow mere manifestations, or imaginary coefficients, of a most complex integral. (6)

As far as D’Arcy goes in his book, his simplifications hold true:

And so far as we have gone, and so far as we can discern, we see no sign of the guiding principles failing us, or of the simple laws ceasing to hold good. (7)

Of course, this does not automatically lead to complete understanding. We only get that tiny bit closer to seeing the bigger – and smaller – picture:

We learn and learn, but will never know all, about the smallest, humblest, thing. (8)

Because we must never forget that adding together those simplifications does not automatically lead to the answer to the complete problem (and I find this oddly poetic):

The biologist, as well as the philosopher, learns to recognise that the whole is not merely the sum of its parts. It is this, and much more than this. (9)

To end, D’Arcy also makes note of things beyond his comprehension:

It may be that all the laws of energy, and all the properties of matter, and all the chemistry of all the colloids are as powerless to explain the body as they are impotent to comprehend the soul. For my part, I think it is not so. (10)


Contact surfaces between four cells, or bubbles. This has nothing to do with the soul. It does have to do with how we can often simplify cells to their “shells”, and for certain principles this approximation holds true.


Sources:

(1) Dr. George Martine, Medical essays and Observations, Edinburgh, 1747.

(2) On Growth and Form, p. 19
(3) On Growth and Form, p. 292
(4) On Growth and Form, p. 643-644
(5) On Growth and Form, p. 289
(6) On Growth and Form, p. 1018
(7) On Growth and Form, p. 644
(8) On Growth and Form, p. 19
(9) On Growth and Form, p. 1019
(10) On Growth and Form, p. 13
(2-10) from D’Arcy Thompson, On Growth and Form, Cambridge University Press, 1992 (unaltered from 1942 edition)