Researchers recently found a treasure trove of unopened, 125-year-old beer bottles in a shipwreck off the Scottish coast. Inside those bottles, preserved thanks to the cold ocean water, was an even greater treasure: live yeast.
Beer Archeology
It was only recently that I learned about the field of “beer archeology,” after hearing a talk by beer archeologist Travis Rupp. I was delighted to learn that there is a whole field dedicated to recreating ancient beers, going as far back as ancient Greece and Rome, using only materials, ingredients, and methods that would have been available at the time – or as close as is still available.
Inspired by old techniques, Travis Rupp has developed a series of beers named “Ales of Antiquity.” There’s an ancient-Egypt-inspired beer and a Viking-inspired beer. And of course, there is a Belgian-style beer as well!
If I’d been interested in beer at the same time I was interested in archeology (the latter was when I was about 8 years old), who knows where I would have (could have) ended up?
Yeast explorers
The yeast found in the Scottish shipwreck is only one of many endeavors by brewers to resurrect old strains, which can not only be used to brew historical beers but may also have applications in cleaning up pollution and in the perfume industry (though the article states that the smell of the beer itself was quite atrocious).
That doesn’t mean no one tried brewing a beer – of course they did! Scientists at Brewlab, a spin-out from the University of Sunderland, isolated two types of yeast from the Wallachia shipwreck beer: Brettanomyces and Debaryomyces. With those, they brewed a 7.5% stout that apparently had some coffee and chocolate notes. Certain byproducts of fermentation create a distinct flavor that is specific to the yeasts used.
And resurrecting ancient yeasts can yield more interesting flavors, compared to the limited range of strains used by most modern brewers today. Maybe something to try in our next batch of homebrew? Time to go diving, I guess!
If you’ve ever done any cell culture, whether in a biology course, during grad school, or in an industrial research setting, chances are you’ve worked with HeLa cells.
About a week ago, I started drafting this post after my supervisor mentioned “I could just use any cell to test [a new protocol on], like, even HeLa cells.” Then today, via the Instagram account @womenengineerd, I learned Henrietta Lacks was born exactly 100 years ago (+ 3 days). So, it feels even more important to highlight this story: what are HeLa cells, who was Henrietta Lacks, and why is this all so important?
The source of a cell
In 1951, a poor Black woman went to the Johns Hopkins Hospital with cervical cancer. Without asking for permission, the doctors took some of the tumor cells to study and made a remarkable discovery: these cells continued to grow and survive in culture. They were immortal.
Later that year, that woman died, but her cells lived on for decades, and will likely continue to live on for many more. That woman’s name was Henrietta Lacks, and the cells she provided are a staple in practically every cell biology lab: HeLa cells.
An immortal cell
Immortalized cells are incredibly useful for biological research. They can be taken from cancer biopsies (now with consent!) or created by inducing mutations in other cells, in both cases giving the cells the potential to live on forever.
Researchers can continue to grow them in culture and use them for biological, biochemical, pharmaceutical, and biotechnological research. They are easy to work with and don’t really require any special attention, because they just want to grow, grow, grow.
HeLa cells were the first human cells to be immortalized, and they have been used extensively ever since they were taken from Henrietta Lacks.
The legacy of Henrietta Lacks is immense. A search for “HeLa cells” on Google Scholar yields 1,730,000 results (not that this is an accurate estimate of actual research conducted with HeLa cells), and over 17,000 US patents involve HeLa cells.
From my personal experience, it seems that HeLa cells are used everywhere, from undergrad cell biology labs to ground-breaking research in both academia and industry. It’s hard to say for sure how influential this one cell type has been, or how much money it has made the companies selling them.
The irony of immortality
But while HeLa cells have been one of the most important ingredients of modern biology, neither Henrietta Lacks nor her family received any of the benefits. It was not until the 1970s that her family was even informed that their relative’s cells were being used in such a widespread way. Furthermore, HeLa cells were bringing in the big bucks while her family had little money (ironically, some of them could not afford health insurance).
As I’ve stated, I’ve used HeLa cells. Cells that were extracted from a Black woman without her knowledge or her consent. Cells that have made companies millions, without any contribution to her family. Cells that have helped us understand basic biology and the function of genes and proteins in our body, that have helped develop new medicines and treatments for cancer, that have taught many of us the principles of cell culture, all without teaching us their origin story and problematic history.
What now?
I’m not saying we should no longer use certain cells, but we should at least be aware of their potentially problematic histories. Johns Hopkins University has been working with the Lacks family to honour Henrietta’s legacy; since the 1950s, standards of consent and research ethics have been established; and Henrietta’s story is more widely known thanks to a book and a movie.
Nevertheless, as far as I can find, her descendants have not been compensated in any way. In a 2017 interview, her grandson Ron stated: “It’s not all about the money. My family has had no control of the family story, no control of Henrietta’s body, no control of Henrietta’s cells, which are still living and will make some more tomorrow.”
So, right after what would have been her 100th birthday, what can we do to give control back to her family?
In the last few months, a lot of us have been confined to our homes. We no longer commute daily to our workplace, spend less time stuck in traffic, and have canceled our travel plans. With fewer cars on the road, fewer airplanes in the sky, and some industrial activities shut down, global CO2 emissions are likely to have decreased. In fact, a recent paper estimated the emission reductions using predictive models and reported a decrease in daily global CO2 emissions of ~17% by April 2020, compared with mean 2019 levels.
The same paper predicts that the total average emissions of 2020 will decrease by somewhere between 4% and 7% compared to the 2019 average, depending on the duration of confinement.
Most of the scientific community agrees that changes in the amount of CO2 in the atmosphere affect global temperature; this fact will likely not surprise you. But perhaps it will surprise you that this has been known for a long time.
Not just for a few decades. But for two centuries.*
Climate science in the 19th century? Yes, it was a thing.
In the early 19th century, scientists suspected that the earth’s atmosphere keeps the planet warm by transmitting visible light but absorbing infrared light (or heat), and that human activity could change the atmosphere’s temperature. Among them was Joseph Fourier, who, in an 1827 paper, mentioned “the progress of human societies” having the potential to – over the course of many centuries – change the “average degree of heat.”
In 1859, Fourier’s theoretical musings were turned into experiments, when John Tyndall, an Irish physicist, published his study investigating the absorption of infrared radiation by different gases. This was the first** experiment showing how heat absorption by the atmosphere could lead to temperature rises, and that certain gases – such as water vapor, methane, and CO2 – absorb more heat than others.
Three years earlier…
But wait! Three years before Tyndall’s paper, another paper had appeared in the American Journal of Science and Arts: Circumstances affecting the Heat of the Sun’s Rays, showing how the sun’s rays interacted with different gases and concluding that CO2 trapped the most heat, compared to air and hydrogen. The paper was by a woman named Eunice Newton Foote.***
Now, more than 160 years after her experiments and findings, Foote is credited as the first scientist to have experimented on the warming effect of the sun’s light on the earth’s atmosphere, and the first to theorize that changing levels of CO2 would change the global temperature. In her paper, she stated:
“An atmosphere of that gas would give to our earth a high temperature; and if, as some suppose, at one period of its history, the air had mixed with it a larger proportion than at present, an increased temperature from its own action, as well as from increased weight, must have necessarily resulted.”
Foote (1819-1888) was a farmer’s daughter and lived at a time when women were typically not considered scientists. She did not have a sophisticated laboratory, so her experimental setup was rather amateurish compared to Tyndall’s a few years later. When her results were presented at the American Association for the Advancement of Science conference, it was not by her, but by Professor Joseph Henry of the Smithsonian.
While she gained some recognition for her work at the time, it was rather limited, and she was then largely forgotten by history. Henry presented her work at the conference, prefacing the talk with: “Science was of no country and of no sex. The sphere of woman embraces not only the beautiful and the useful, but the true.” And she was praised in the September 1856 issue of Scientific American, in an article titled “Scientific Ladies.”
It wasn’t until 2010, however, when her paper was rediscovered by a retired petroleum geologist, that her name was slowly put back on the climate science map.
Three strikes, and you’re out!
According to John Perlin, who wrote a book about Foote:
“She had three strikes against her. She was female. She was an amateur. And she was an American.”
First, there weren’t very many female scientists at the time; women had a hard time getting a formal (science) education.
Second, she did not have a traditional science education, and her experimental setup was nowhere near as sophisticated as Tyndall’s. Her experiment was much simpler and more limited in its results: she was not able to distinguish between visible and infrared radiation. But her serendipitous discovery that CO2 traps more heat than the other gases she tested, and her hypothesis that changing atmospheric CO2 would affect global temperature, were the first of their kind.
Finally, Europe was still the epicenter of scientific discovery at the time; the US, and physics in the US, were still very much up and coming. And communicating discoveries overseas, without fiber optics and the internet, was just not as trivial as it is today.
For many decades, John Tyndall was considered the father of climate science, and granted, he was the first to show that certain gases absorbed more heat radiation (rather than radiation in general) than other gases. But Foote was the mother, first theorizing what we now know to be true: changing levels of atmospheric CO2 result in changes in global temperature. And now, almost two centuries later, she’s remembered for it.
So while you’re working from home and putting less CO2 into the atmosphere as a result, spare a little thought for the woman scientist who first linked CO2 with temperature. The fact that she did is pretty amazing.
I highly recommend this Cogito video on the history of climate change.
We’re two days into 2020. Just a few days ago, you couldn’t go online without seeing some recap of the last decade, or go anywhere without people jokingly saying “see you next year! Oh, next decade!”
All those jokes rest on a few assumptions. The first is that everyone uses the same calendar: the Gregorian calendar.
In October 1582, Pope Gregory XIII introduced the Gregorian calendar as an update to the Julian calendar. Both are solar calendars, i.e. calendars that start counting when the sun moves through a fixed point, with a year lasting ~365 days. This is different from a lunar calendar – based on the cycles of the moon, in which a month (or moonth?) is 28 days – which would not nicely sync up with the seasons.
In the Gregorian calendar, leap days keep the calendar in step with the astronomical cycle of the earth around the sun, with the leap year skipped every 100 years (but not every 400*), giving an average calendar year of 365.2425 days. Sort of; even with the skipped leap years, this approximation is off by about 26 seconds a year, or one day every 3,030 years.
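The whole scheme, spelled out in the footnote below, fits in a few lines of code. A minimal sketch in Python, just to check the arithmetic:

```python
def is_leap(year: int) -> bool:
    """Gregorian rule: divisible by 4, except centuries, except every 400th year."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap(1900), is_leap(2000))  # False True

# Average calendar year over the full 400-year cycle:
mean_year = 365 + 1/4 - 1/100 + 1/400
print(mean_year)  # 365.2425 days

# Residual drift versus the ~365.2422-day astronomical year:
print(round((mean_year - 365.2422) * 24 * 60 * 60))  # ~26 seconds per year
```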
For most things, most of the world has adopted the Gregorian calendar for their daily life somewhere between 1582 and the early 20th century, even if cultural and religious calendars were kept in parallel.
If you think too much about it, months seem completely arbitrary – except maybe for the solstices landing on roughly the same date – and other systems sound more plausible, like the Equal Month Calendar, with its 13 months of 28 days each plus an extra day or two depending on leap years.
But for all intents and purposes, the whole world has agreed that the year starts on January 1st, based on the ideas of a medieval pope. And when decades start depends on Christianity too.
The year 1 A.D.
Our current calendar starts counting from 1 A.D. (Anno Domini), with year 1 being the year Jesus was allegedly born.
Allegedly, because it wasn’t until 525 A.D. that the year was set by Dionysius Exiguus when he was devising his method to calculate Easter. Historians believe that Jesus was actually born at least a few years earlier, and not necessarily on Christmas day. In any case, 1 A.D. has now generally been adopted as “the start of counting of years” and sometimes referred to as 1 C.E. (common era) to avoid religious connotations.
But the most interesting thing about 1 A.D. – for me at least – is that there is no year 0.
The Roman numeral system had no concept of zero, and it wasn’t until the eighth century that the Arabic numeral for zero was introduced in Europe – and it wasn’t widely used until the seventeenth century.
If that’s the case, did we really just start a new decade?
Counting from zero
A decade is simply “a span of 10 years,” so new decades are constantly starting. We typically don’t celebrate them, except for the decades of our lives and those marked out by the calendar.
In the 20th century, we started referring to decades as groups of years sharing the same digits: the years 1990-1999 are the nineties (says this nineties kid), as opposed to counting from 1 to 10 (in which case the nineties would have run from 1991 to 2000). You might think the latter is the more “mathematically correct” way of counting, but whether you start counting from 0 or from 1 is really just a convention – in programming, plenty of systems count from 0.
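The difference between the two conventions is a classic fencepost problem, easy to see in a quick sketch (the function names are my own):

```python
def decade_by_digits(year: int) -> range:
    """The 'nineties' convention: years sharing the same tens digit."""
    start = year // 10 * 10            # 1999 -> 1990
    return range(start, start + 10)    # 1990..1999

def decade_by_counting(year: int) -> range:
    """The count-from-1 convention: e.g. 1991 through 2000."""
    start = (year - 1) // 10 * 10 + 1  # 2000 -> 1991
    return range(start, start + 10)    # 1991..2000

print(2020 in decade_by_digits(2020))    # True: a new decade just started...
print(2020 in decade_by_counting(2011))  # True: ...or we're in the last year of one
```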
Every 10 years, and definitely at the start of the new millennium, the same discussion occurs: when do we start counting a decade/century/millennium?
A poll from a month ago shows that most Americans (64%) saw the new decade start yesterday, while about a fifth (19%) were not sure. Only 17% answered the new decade will start on January 1, 2021.
In my opinion, it doesn’t really matter. Having a new decade start on a year ending with a 0 looks nicer, and in the 21st century means we can make fun novelty glasses where the lenses fit in the zeros. I’ll continue to be pedantic and say that the decade doesn’t start until next year if only so I can forget about it and not do that “looking back on a decade” thing, ever.
In any case, whether you think the year started yesterday, or the decade, or if it was just a regular old day, have a marvelous 2020!
*The rule is that every year divisible by four is a leap year, except for years divisible by 100, with an exception to that exception for years divisible by 400.
Here’s a riddle for you: what hangs in every chemistry class in middle and high school, has led to the creation of several nerdy t-shirts, and celebrated its 150th birthday yesterday?
Okay, it’s not a very funny riddle. Nor is it a very difficult one. The answer is: the periodic table of elements, first published on the 6th of March 1869 – exactly 150 years minus-one-day ago – by the Russian chemist Dmitri Mendeleev.
From Alchemy to Chemistry
In the olden days, we would have turned to alchemists with our questions about fundamental elements and what stuff makes up stuff. Even though alchemy was not really a “science” in the pure sense of the word – it relied heavily on spiritualism, philosophy, and even magic – it set the stage for what would later become chemistry. And while alchemists were mostly trying to turn random metallic rocks into gold, or brew an elixir for eternal life, they were the first to attempt to identify and organize the different substances occurring in nature. The Elements.
The earliest basic elements were considered to be earth, water, air, and fire. The discovery of what we might call “chemical elements” really kicked off in 1669 in Germany, with a merchant by the name of Henning Brand. Like many chemists-avant-la-lettre (alchemists), he was trying to discover the Philosopher’s stone. However, like many muggles, he was not acquainted with Nicolas Flamel and did not succeed (side note: Nicolas Flamel was actually based on a real person!). Instead, while distilling urine – as you would while trying to create eternal life – he discovered a glow-in-the-dark substance: phosphorus. And with that, the element hunt had begun.
Chemistry can be considered to have originated in 1789, when Antoine-Laurent de Lavoisier wrote what is said to be the first modern chemistry textbook. In this book, he defined an element as a substance that cannot be broken down into a simpler substance. A fundamental particle. This definition lasted until the discovery of the subatomic particles (electrons, protons, and neutrons) in the late 19th and early 20th centuries. Lavoisier’s list of elements included things like oxygen, hydrogen, and mercury, but also light.
Let’s gloss over most of the 19th century, during which multiple scientists realized that the atomic weights of elements were multiples of that of hydrogen (William Prout) and that there was a certain periodicity in physical and chemical properties when the elements were arranged according to their atomic weights (Alexandre-Émile Béguyer de Chancourtois). The early attempts to classify the elements were based on this periodicity, and eventually our Mendeleev came along.
“Chemical Solitaire”
The Russian chemist Dmitri Mendeleev is the father of the modern periodic table. In fact, in Belgium, we call the periodic table of elements “Mendeleev’s table of elements”. After (allegedly) playing “chemical solitaire” on long train journeys – quite common in Russia, I’m sure – he came up with a classification method based on arranging the elements by atomic mass and grouping them according to their properties. Elements in one group (column) have the same number of valence electrons: the electrons in the outer shell of the atom that are available to react with other elements. Elements in the same column therefore form bonds with other elements in the same way, and form similar types of materials.
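As a toy illustration of that “chemical solitaire” – sort by atomic mass, then stack by valence electrons – here’s a sketch with a handful of light elements (modern atomic masses and simplified valence counts, so illustrative rather than historical):

```python
# (symbol, atomic mass, valence electrons)
elements = [
    ("Na", 22.99, 1), ("Li", 6.94, 1), ("Mg", 24.31, 2), ("Be", 9.01, 2),
    ("B", 10.81, 3), ("Al", 26.98, 3), ("C", 12.01, 4), ("Si", 28.09, 4),
    ("N", 14.01, 5), ("O", 16.00, 6), ("F", 19.00, 7),
]

elements.sort(key=lambda e: e[1])  # arrange the "cards" by atomic mass...

groups = {}
for symbol, mass, valence in elements:
    groups.setdefault(valence, []).append(symbol)  # ...and stack them by valence

for valence, column in sorted(groups.items()):
    print(f"{valence} valence electron(s): {' '.join(column)}")
# 1 valence electron(s): Li Na   <- same column, similar chemistry
# 2 valence electron(s): Be Mg   ...and so on
```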
Because there were some gaps in the table – some atomic weights missing – he predicted the existence of elements that were yet to be discovered, and what their chemical properties would be. And this is what made his classification method so ground-breaking.
And indeed, in 1886 germanium was discovered, with properties – as predicted – similar to silicon. The same happened with gallium in 1875 (similar properties to aluminum) and scandium in 1879 (similar properties to boron), filling up some of the gaps in his periodic table.
The gaps are filled
Since 1869, the gaps in the periodic table have been filled, and new elements are discovered or created every few years, adding to the heavy end of the table. The last update to the periodic table was in 2016, when the elements nihonium (113), moscovium (115), tennessine (117), and oganesson (118) were added to the list.
So today – okay, yesterday – we celebrated 150 years of chemical element classification: the anniversary of the periodic table of elements, and of the collective pain of decades of high schoolers memorizing atomic masses and numbers of valence electrons.
Since I first gained the use of reason my inclination toward learning has been so violent and strong that neither the scoldings of other people … nor my own reflections … have been able to stop me from following this natural impulse that God gave me. He alone must know why; and He knows too that I have begged him to take away the light of my understanding, leaving only enough for me to keep His law, for anything else is excessive in a woman, according to some people. And others say it is even harmful.
I read this quote in Contact by Carl Sagan. It was written by Juana Inés de la Cruz in her Reply to the Bishop of Puebla in 1691. The Bishop had attacked her scholarly work as being inappropriate for a woman; while he claimed to agree with her views, he didn’t think them appropriate for her sex. Rather than writing, she should devote her life to prayer, an endeavor much more suitable for a woman.
Aargh. 17th century clergymen are just the worst.
We’d better talk about this amazing woman then:
Juana Inés de la Cruz lived in the 17th century in what was then New Spain and is now Mexico, and was a self-taught polyglot (my favorite type of inspirational person). She studied scientific thought and philosophy, and she was a composer and a poet – all in an age long before women were allowed to do anything that involved using their brains.
If we may believe the stories, Juana started teaching herself at a young age by hiding away to read her grandfather’s books. She supposedly learned how to read and write Latin at the age of three. At 16, she asked her mother’s permission to disguise herself as a man so she could study – some avant-la-lettre version of She’s the Man – but her mom wouldn’t let her, so she had to continue studying in secret. By then, she already knew Latin, Greek, Nahuatl, and accounting – the most important language of them all *ahem*.
In 1669, she became a Hieronymite nun so she could study in freedom – other monasteries were a lot more strict and wouldn’t allow her to pursue her passion for knowledge, philosophy, and writing. As a nun, she would write on the topics of religion, feminism, and love; often criticizing the hypocrisy of men and defending women’s right to education. In Reply to Sister Philotea, she wrote:
Oh, how much harm would be avoided in our country [if women were able to teach women in order to avoid the danger of male teachers in intimate setting with young female students.]
[Such hazards] would be eliminated if there were older women of learning, as Saint Paul desires, and instructions were passed down from one group to another, as in the case with needlework and other traditional activities.
Okay, now I’m imagining needlework maps of our universe. That’d be cool.
Unfortunately, this story has an unhappy ending. According to some sources, rather than being censored by the church, Sister Juana herself decided to stop writing and sold all her books, musical instruments, and scientific tools. Other sources claim that her belongings were confiscated by the bishop due to her defiance towards the church. Either way, a lot of her writings have been lost, and soon after, in 1695, she died after caring for other nuns during an outbreak of the plague.
There’s a lot of buzz about inspirational women, in science and elsewhere, nowadays. With plenty of books full of inspiring stories, such as Rachel Ignotofsky’s Women in Science, finding empowering role models has never been easier. And I love it. Showing as diverse a range of inspirational historical figures as possible gives everyone role models they can identify with and aspire to. However, I have noticed that my knowledge of inspirational women is primarily European-based. So this is me trying to change that.
Sometimes I feel like I was born in the wrong era.
Usually, this feeling is music-related. Now that I have renewed access to my dad’s old record collection (and a record player, #Hipster), I can’t help but feel that rock music from the ’70s and ’80s surpasses anything being made now. Comparing music from the “olden days” to music now is of course not entirely fair; what still remains has already withstood the test of time, while current music hasn’t had to (yet).
Music aside, my wrong-time-feeling also applies to how I feel about science and research. Nowadays, scientific discoveries seem to always be the result of hard work of an entire team of scientists for countless years. There is so much knowledge and information out there, it seems imperative to find one’s own little niche and specialise, specialise, specialise. It is impossible to be a master of all.
However, I long for the golden old days of the polymaths and the homines universales, when academics were interested in all fields. They were allowed, or even required, to branch out and study all the sciences, not to mention the humanities, linguistics, and the arts. I’m speaking of people like Galileo Galilei and Leonardo da Vinci. My favourite person, D’Arcy Thompson, would also be considered a polymath.
A polymath is defined as someone with “knowledge of various matters, drawn from all kinds of studies ranging freely through all the fields of the disciplines, as far as the human mind, with unwearied industry, is able to pursue them” (1). While perusing the Wikipedia page, I noticed that the examples given of Renaissance Men are indeed all men. Even if I had been born in the right era to be a homo universalis, I would still have been born the wrong gender.
However, there are at least a few examples of female polymaths, and I wanted to introduce you to one of them: Dorothy Wrinch. Just in case you wanted a more nuanced example.
Dorothy Maud Wrinch (12 September 1894 – 11 February 1976)
Dorothy Wrinch was a mathematician by training but also took an interest in physics, biochemistry, and philosophy. Even though I’ve only recently heard of her, she is an excellent example of the homo universalis I wish I could be. She was also a friend of D’Arcy Thompson, though if I remember correctly, they mostly kept up a written correspondence.
In any case, Dorothy is known for her mathematical approaches to explaining biological structures, such as DNA and proteins. Most notably, she proposed a mathematical model for protein structure that – albeit later disproved – set the stage for biomathematical approaches to structural biology, and mathematical interpretations of X-ray crystallography.
She was a founding member of the Theoretical Biology Club, a group of scientists who believed that an interdisciplinary approach of philosophy, mathematics, physics, chemistry and biology, could lead to the understanding and investigation of living organisms.
She is described as “a brilliant and controversial figure who played a part in the beginnings of much of present research in molecular biology. (…) I like to think of her as she was when I first knew her, gay, enthusiastic and adventurous, courageous in face of much misfortune and very kind.” (2)
Actually, come to think of it, maybe Dorothy was born in the wrong era. Nowadays, using mathematical approaches to protein structure is practically commonplace. Though I’m not quite sure how well philosophy would fit in.
Anyway, I still feel that interdisciplinary research, and having broad interests, is not the easiest path to go down. But as long as we have inspirational people to look up to, past and present, we know it is worth a try.
(Wow, went way overboard with the #Inspirational stuff towards the end there.)
(1) As defined by Wower, from Wikipedia.
(2) Dorothy Crowfoot Hodgkin (Wrinch’s obituary in 1976). An updated version of this post was published on the Marie Curie Alumni Association blog on March 19, 2019.
To end my series of posts on the man and the book (D’Arcy Thompson and On Growth and Form, respectively – the latter a book of over 1000 pages), I wanted to share a few more quotes from and about him that I found interesting enough to type out:
“In his figure and bearded face there was majestic presence; in his hospitality there were openness, kindness and joviality; in his ever quick wit were the homely, the sophisticated and, at times, the salty… in status he became a very doyen among professors the world over; in his enquiring mind he was like those of whose tongue and temper he was a master, the Athenians of old, eager ‘to tell or hear some new thing'” – Professor Peacock (1)
With the name Professor Peacock, I can’t help but imagine a flamboyant, multicolour-labcoat-wearing, frizzle-haired man…
I hope the meaning of the word salty has changed over time…
There is a certain fascination in such ignorance; and we learn without discouragement that Science is “plutôt destinée à étudier qu’à connaître, à chercher qu’à trouver la vérité.” (2)
(Rather destined to study than to know, to seek than to find, the truth.)
#IgnoranceIsBliss?
In my opinion the teaching of mechanics will still have to begin with Newtonian force, just as optics begins in the sensation of colour and thermodynamics with the sensation of warmth, despite the fact that a more precise basis is substituted later on. (3)
As a self-proclaimed science communicator, I often find it difficult to judge how much to simplify things. On the other hand, making things relatable to everyday experiences does not necessarily mean telling untruths. Classical physics may not be valid in every single situation, but it is often enough to describe what is happening, without needing to resort to more complicated relativistic physics. And you don’t have to start quoting wavelengths when a colour description would do just as well. Fill in the details later, if necessary.
Some quotes on evolution and natural selection:
And we then, I think, draw near to the conclusion that what is true of these is universally true, and that the great function of natural selection is not to originate, but to remove. (4)
Unless indeed we use the term Natural Selection in a sense so wide as to deprive it of any purely biological significance; and so recognise as a sort of natural selection whatsoever nexus of causes suffices to differentiate between the likely and the unlikely, the scarce and the frequent, the easy and the hard: and leads accordingly, under the peculiar conditions, limitations and restraints which we call “ordinary circumstances,” one type of crystal, one form of cloud, one chemical compound, to be of frequent occurrence and another to be rare. (5)
We can move matter, that is all we can do to it. (6)
On a fundamental level, are we really able to build things? Aren’t we just rearranging the building blocks?
I know that in the study of material things, number, order and position are the threefold clue to exact knowledge; that these three, in mathematician’s hands, furnish the “first outlines for a sketch of the universe”, that by square and circle we are helped, like Emile Verhaeren’s carpenter, to conceive “Les lois indubitables et fécondes qui sont la règle et la clarté du monde.” (7)
(The indubitable and fruitful laws that are the rule and the clarity of the world.)
For the harmony of the world is made manifest in Form and Number, and the heart and soul and all the poetry of Natural Philosophy are embodied in the concept of mathematical beauty. (8)
Delight in beauty is one of the pleasures of the imagination … (9)
#MathIsLife. Thank you, D’Arcy, for the 1000+ pages of mind-expanding, educational and philosophical topics.
Sources:
(1) D’Arcy Thompson and his zoology museum in Dundee – booklet by Matthew Jarron and Cathy Caudwell, 2015 reprint
(2) On Growth and Form – p. 19
(3) Max Planck
(4) On Growth and Form – p. 269-270
(5) On Growth and Form – p. 849
(6) Oliver Lodge
(7) On Growth and Form – p. 1096
(8) On Growth and Form – p. 1096-1097
(9) On Growth and Form – p. 959
(2, 4-6, 8-9) from D’Arcy Thompson, On Growth and Form, Cambridge University Press, 1992 (unaltered from the 1942 edition)
Neat process diagrams of metabolism always gave the impression of some orderly molecular conveyer belt, but the truth was, life was powered by nothing at the deepest level but a sequence of chance collisions. (1)
Zoom down far enough (but not too far – or the Aladdin merchant might complain) and all matter is just a soup of interacting molecules. Chance encounters and interactions, but with a high enough probability of happening. In essence, life is a series of molecular interactions (which, in turn, are atomic interactions, and so on and so on…)
The form of the cellular framework of plants and also of animals depends, in its essential features, upon the forces of molecular physics. (2)
Quite often, we can ignore those small-scale phenomena, but only as long as the system we are describing is large enough. As in physics, size matters in biological systems (*insert ambiguous joke here*). We have to adapt the governing physical rules to the scale we are observing. Do we consider every quantum-biological detail? Can we use the cell as the smallest entity, or even treat whole organisms as the smallest functional unit?
Life has a range of magnitude narrow indeed compared with that with which physical science deals; but it is wide enough to include three such discrepant conditions as those in which a man, an insect and a bacillus have their being and play their several roles. Man is ruled by gravitation, and rests on mother earth. A water-beetle finds the surface of a pool a matter of life and death, a perilous entanglement or an indispensable support. In a third world, where the bacillus lives, gravitation is forgotten, and the viscosity of the liquid, the resistance defined by Stokes’ law, the molecular shocks of the Brownian movement, doubtless also the electric charges of the ionised medium, make up the physical environment and have their potent and immediate influence on the organism. (3)
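D’Arcy’s three worlds map neatly onto the Reynolds number, Re = ρvL/μ, the ratio of inertial to viscous forces. A minimal sketch with rough, assumed sizes and speeds (order-of-magnitude guesses on my part, purely for illustration):

```python
RHO_WATER = 1000.0  # kg/m^3
MU_WATER = 1e-3     # Pa*s

# Rough speed (m/s) and length (m) scales for each organism:
organisms = {
    "human (swimming)": (1.0, 1.0),
    "water beetle":     (0.1, 0.01),
    "bacillus":         (3e-5, 2e-6),
}

for name, (v, L) in organisms.items():
    re = RHO_WATER * v * L / MU_WATER  # Reynolds number: inertia vs. viscosity
    print(f"{name}: Re ~ {re:.0e}")

# human (swimming): Re ~ 1e+06  -> inertia and gravity rule
# water beetle:     Re ~ 1e+03  -> surface effects start to matter
# bacillus:         Re ~ 6e-05  -> viscosity rules; "gravitation is forgotten"
```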
Observing life at the smallest scales (by which I mean cells and unicellular organisms) at least has the advantage that the rules driving form and structure can, in many cases, be considered relatively simple: surface tension.
In either case, we shall find a great tendency in small organisms to assume either the spherical form or other simple forms related to ordinary inanimate surface-tension phenomena, which forms do not recur in the external morphology of large animals. (4)
While on the topic of size: like many things in the universe, size is relative. I have noticed in conversations with colleagues and supervisors that what is considered small or large definitely depends on one’s perspective (and often on whatever size that person typically studies). I would assume that for a zoologist, a mouse is a small animal, but tell a microscopist they have to image an area of 1 mm² and the task seems monstrous. For a particle physicist, a micrometre is immense, but for an astrophysicist, the sun is actually quite close.
We are accustomed to think of magnitude as a purely relative matter. We call a thing big or little with reference to what it is wont to be, as when we speak of a small elephant or a large rat; and we are apt accordingly to suppose that size makes no other or more essential difference. (6)
Undoubtedly philosophers are in the right when they tell us that nothing is great and little otherwise than by comparison. (5)
There is no absolute scale of size in the Universe, for it is boundless towards the great and also boundless towards the small. (5)
That’s the amazing thing about science: we strive to understand the universe on all scales. The universe is mindblowing in its size, in both directions on the length scale.
We distinguish, and can never help distinguishing, between the things which are at our own scale and order, to which our minds are accustomed and our senses attuned, and those remote phenomena which ordinary standards fail to measure, in regions where there is no habitable city for the mind of man. (7)
Good thing we have scientists, amazing minds, capable of studying, visualising and even starting to understand the universe on all its scales…
(1) Permutation city – Greg Egan, p. 67
(2) Wildeman
(3) On Growth and Form – p. 77
(4) On Growth and Form – p. 57
(5) Gulliver
(6) On Growth and Form – p. 24
(7) On Growth and Form – p. 21
(3-4, 6-7) from D’Arcy Thompson, On Growth and Form, Cambridge University Press, 1992 (unaltered from the 1942 edition)