Category Archives: Science

Dark Matter by Blake Crouch

Quantum mechanics is weird. Probably the weirdest part is that it only makes predictions about the probability of what can happen. Newton’s laws say that, with a given force, an object will move in a particular way. Quantum mechanics says only that the object will probably move one way, with some probability that it moves another way instead. So, with quantum mechanics, we are always dealing with probability. When we measure something, one of the many possibilities is actually realized. But this is at the heart of the weirdness: which one?

There are several interpretations of quantum mechanics that try to address this, but they are all, essentially, non-testable hypotheses. One says that all possibilities happen and we are living in just one of those potential worlds: whenever a quantum measurement is made, reality splits into different worlds, one for each possible outcome. This is the many-worlds interpretation of quantum mechanics. Going even further, the many-minds interpretation says that any time a mind makes a decision, reality splits. It is this interpretation that is at the heart of Blake Crouch’s Dark Matter.

Really, the many-minds view of quantum mechanics is just the backdrop, a vehicle that lets Crouch explore ideas about the road not taken. We have all wondered “what if”: what if I had asked that girl out, what if I had gotten that other job, what if that special someone were still alive? We only get one chance at life and we make the best of it. But what if there were a chance to redo it, to take that untaken road? Crouch’s main character, Jason, had promise as a brilliant physicist. His wife, Dani, was an up-and-coming star artist. However, they both put those plans aside to raise their son when Dani became pregnant. While both are happy, both also have regrets. What if they had made different choices?

I won’t give away the plot, as there is a lot of derring-do and action to go along with the exploration of these themes. I’ll just say that, in the end, Jason learns a few important lessons:

  • And maybe I can let go of the sting and resentment of the path not taken, because the path not taken isn’t just the inverse of who I am. It’s an infinitely branching system that represents all the permutations of my life…
  • I thought I appreciated every moment, but sitting here in the cold, I know I took it all for granted. And how could I not? Until everything topples, we have no idea what we actually have, how precariously and perfectly it all hangs together.

Dark Matter uses some out-there physics to explore fundamental questions of existence, and does so while telling an action-packed story with some really interesting plot twists. Crouch’s approach to writing took inspiration from Michael Crichton: “I realized that he wasn’t just coming up with cool plots. He was writing books that allowed him to explore topics that interested him. Writing a thriller as self-education.” And, by educating himself, Crouch provides a yarn that is both thought-provoking and full of action.

Thirteen by Henry SF Cooper Jr

When I saw the movie Apollo 13, the one that stars Tom Hanks, I felt that the story it depicted was the embodiment of engineering: a group of clever people solving an unsolvable problem, fixing a tiny spacecraft that had malfunctioned on its way to the moon with nothing more than the parts they had on hand as they hurtled through space and the combined ingenuity of hundreds of people — three in the capsule and the rest on the ground. The way they systematically tackled the problem while also bringing in their own out-of-the-box thinking was, for me, what engineering was all about.

(In reality, engineering, at least at the professional scale, is often not so dramatic. I almost became an engineer — this was long before the movie — but an internship at a major computer company killed that desire.)

The book Thirteen, by Henry SF Cooper Jr, recounts the Apollo 13 mission in minute detail. Cooper scoured the logs and transcripts of the mission, interviewed many sources, and produced what feels like a step-by-step account of the mission and of how the crew and ground teams fixed the spacecraft to get it back to earth. There are no embellishments here, no extra drama. Cooper recounts what people said and, when possible, what they thought, based on their own words. He doesn’t add drama beyond what is in the record. At times, his recounting of the failure, of the way everyone works first to understand and then to fix what happened, and of the actual return to earth can seem a little dry. But its factual narrative makes the actual events that much more impressive because, simply, that’s what happened.

Cooper notes multiple times that one reason that the spacecraft failed is that, simply, NASA couldn’t imagine certain things happening. As has been said about the space shuttle disasters, maybe there was some hubris, some overconfidence, in the teams. After all, they had landed multiple men on the moon by that point. They could do anything. And nothing was wrong with their designs. If anywhere, this is where Cooper lets his own biases creep in, as he clearly feels that NASA had gotten too complacent, too proud to even think of such eventualities: “they felt secure in the knowledge that the spacecraft was as safe a machine for flying to the moon as it was possible to devise. Obviously, men would not be sent into space in anything less.” At one point, he highlights how the astronauts complained about being put through simulations of such unrealistic scenarios. After the events of Apollo 13, they complained no more, at least not for a while.

The detail Cooper provides on all aspects of the mission instills a sense of wonder at how complex these missions were. To stay warm, the crew had to roll the spacecraft regularly, changing which side faced the sun so that the electronics wouldn’t get too cold. They had to enter the atmosphere at just the right angle: too shallow, and they would bounce off; too steep, and they would burn up as they approached earth. With the spacecraft crippled, they had to figure all of this out again in real time, with a lot of uncertainty about the true behavior of the ship.

Getting the astronauts home was one engineering challenge after another. Once the spacecraft failed, the very first problem was keeping the astronauts breathing, as their main oxygen supply was gone. They also needed water but, something I learned, astronauts in space don’t feel thirst, even when they are dehydrated, so they didn’t realize how low they were running. The communications equipment of the lunar module, never meant to be active until they were at the moon, interfered with other devices on the spacecraft, impeding, at least initially, communications with earth. And the lunar module carried radioactive fuel for a generator that would normally have been left on the moon; now they had to worry about where it would fall on the earth.

Thirteen doesn’t have the same sense of drama as a movie, but in some ways, it is all the better at conveying the impressive feats of these people as they got the spaceship home. This book won’t be for everyone, but for those with any inclination towards engineering, it provides a great sense of the drama that the profession can entail, in the right circumstances.

Boltzmann’s Atom by David Lindley

It might be hard to imagine now, but at the end of the 1800s, the scientific community was beginning to think it had more or less wrapped up all of physics. Newton’s mechanics were well understood, and Maxwell had recently shown that light is an electromagnetic wave, unifying electricity and magnetism. Little could they imagine that everything would be turned on its head in just a few short years.

Presaging this transformation of physics was Ludwig Boltzmann, one of the leading figures of what we now call statistical mechanics. He showed how we could move beyond treating particles individually and instead think of them in large groups, in terms of their average properties. This allowed him to consider the properties of solids, liquids, and gases on a scale that connects more directly with everyday life. Maybe most importantly, he showed how the properties of these groups, or ensembles, of particles connect to the concept of entropy, which had been a fairly vague notion before him. His impact on physics is immeasurable, and it is enshrined in various concepts that bear his name, not least among them the Maxwell-Boltzmann distribution and the Boltzmann factor.
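For the record, the two results name-checked above can be written compactly; these are the standard textbook statements, not quotations from Lindley:

```latex
% Boltzmann's entropy formula (the version carved on his tombstone):
% S is entropy, k_B the Boltzmann constant, and W the number of
% microscopic arrangements consistent with the macroscopic state.
S = k_B \ln W

% The Boltzmann factor: the relative probability of finding a system,
% in equilibrium at temperature T, in a state of energy E.
P(E) \propto e^{-E / k_B T}
```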

However, during his life his ideas were not so quick to catch on and were challenged in particular by people like Ernst Mach (yes, the one behind the Mach number for speed), who espoused the view that physics should only describe what is directly observed, that there was no room for theorizing about what caused those observations. That, combined with Boltzmann’s relative isolation at a small university and his own deep ambivalence about his status, led to a relatively slow acceptance of his ideas. Boltzmann’s ideas had, at their core, the concept of atoms, a concept that was not at all widely accepted in his time. As late as 1897, leading scientists such as Mach could exclaim “I don’t believe that atoms exist!”

Of course, there were theories of atoms before Boltzmann, dating back to the ancient Greeks. Lindley traces the development of our theories of atomic particles and Boltzmann’s contributions to them. Boltzmann’s own theoretical advances made predictions, based on the assumption of atoms, that were later validated and helped conclusively show that atoms do indeed exist (Einstein also played a critical role with his theory of Brownian motion). That Boltzmann’s assumptions were fundamentally correct was not a given, and that they led to predictions that agreed with observation did not prove them to be true. As Lindley notes: “You make an assumption and explore the consequences. This is exactly what scientists continue to do today, and the fact that a certain assumption leads to all kinds of highly successful predictions and explanations does not, strictly speaking, prove that the original assumption is correct.”

Lindley’s portrait of Boltzmann is both the story of a man who had some very profound personal issues and the history of a branch of science that presaged the quantum revolution. The story of the advancement of science is fascinating in its own right, as generations of scientists tried to tease out what, at the microscopic scale, was driving the macroscopic observations we make every day. Lindley describes the scientific environment of the time, in which a few big heavyweights dominated the discourse. A single scientist working in relative isolation, like Boltzmann, could have a huge impact on his field. That isn’t as true today, when we seem to be delving more into details than into big swaths of truth. Not that there aren’t any new big truths to discover, but rather that the kinds of technological advances that are rewarded demand digging into the details. And the democratization of science — the shift of science from a rich man’s hobby to a true profession pursued by large armies of people — has made it harder to stand out in the proverbial field.

As for himself, Boltzmann was never happy. He always desired more recognition for his achievements, and that typically meant moving to bigger and better positions at other universities. However, the moment he accepted such a position, he was riddled with doubts and often tried to undo the appointment. Particularly in Austria, where appointments were brought to the attention of the royal court, this earned the poor man a degree of infamy. His vacillations were likely a reflection of some deeper depression or other mental condition; he ultimately took his own life.

Finally, Lindley also provides some metacommentary on the scientific process itself. This comes both through the continual argument between people like Mach and Boltzmann (Mach particularly disliked theorizing, stating, for example, that “the object of natural science is the connection of phenomena, but theories are like dry leaves which fall away when they have ceased to be the lungs of the tree of science”) and through Lindley’s own observations: “Science demands an element of creativity, and an element of faith. The creativity comes in thinking up hypotheses and theories that no one has ever thought of before. The faith comes in thinking that these hypotheses, when shown to be useful or successful in some way, bear a relation to what is loosely called reality.” Late in his life, Boltzmann, spurred on by the attacks of Mach and his followers, turned toward philosophy in an attempt to understand the nature of truth. However, he never really became a philosopher, telling a colleague, “Shouldn’t the irresistible urge to philosophize be compared to the vomiting caused by migraines, in that something is trying to struggle out even though there’s nothing inside?”

Altogether, Boltzmann’s Atom is an excellent portrait of a man and of an era in science. It is a lesson in how science advances, not necessarily through an unstoppable march of progress, but in fits and starts as different personalities come and go. Boltzmann himself is an intriguing figure who bridges two different eras of science. We tend to forget both that science doesn’t always follow an obvious linear path in the search for understanding and that the people who push it forward are human, with very human foibles. Boltzmann’s Atom reminds us of both.

Spillover by David Quammen

Many, if not most, human diseases are what are called zoonotic: they don’t originate in humans but pass to humans from animals. Ebola, the common flu (“…wild aquatic birds are now known to be the ultimate origin of all influenzas…”), and AIDS are just a few examples. It seems that the rate at which such diseases are making their way into the human population is increasing. In his book Spillover, David Quammen delves deep into why, traveling the globe, interviewing leading scientists, and witnessing first-hand our response to these diseases.

Spillover is one of those “popular” science books that it would behoove everyone to read. It is accessible, boiling the science down to the essentials and describing it in a way that is understandable. More importantly, it discusses a topic that is often in the news and which will only become more prevalent and newsworthy as time passes. Further, the types of diseases that Quammen investigates are becoming more common and understanding why and how may be a critical step in preventing a future epidemic.

This is one of those books where I was constantly highlighting passages, as I was continually learning something new, interesting, and, seemingly, important. Quammen is a field reporter, having worked with publications such as National Geographic. He doesn’t just report; he participates in his stories. As such, he has a multitude of anecdotes to share. Some of them are very enlightening, as when he describes helping capture bats to test them for various viruses. Once or twice his penchant for storytelling goes a little astray, for instance when he imagines the voyage of one of the first HIV-positive men in Africa, traveling down the river and settling in a city where he spreads the disease. In these instances I grew a bit impatient and wished he would get to the science. But overall his vibrant descriptions break up what might otherwise have been a dull narrative and certainly give it flavor.

Quammen describes the basics of zoonotic diseases. They have some animal host, often unaffected by the pathogen. Frequently there is also an amplifier host, another animal in which the virus can quickly replicate and from which it can be quickly shed into the environment to find its way to humans. He goes into detail about numerous zoonotic outbreaks and how scientists trace their origins and try to develop vaccines. This effort reads like a detective mystery, as scientists have to piece together very fragmentary bits of evidence to build a picture of what is going on. And this is a hard problem. We still don’t know what the host animal for the Ebola virus is, despite the effort put into identifying it.

One of the central themes of Quammen’s book, one that he raises multiple times but doesn’t beat to death, is that zoonotic diseases are on the rise because humans are continuously disrupting the habitats of animals. Through deforestation and construction, we encroach on areas where animal hosts carrying these diseases have lived, perhaps for millennia, and now we are exposed to them and their pathogens. It is our increased interaction with remote species that seems to be the driver. As Quammen puts it, “Human-caused ecological pressures and disruptions are bringing animal pathogens ever more into contact with human populations, while human technology and behavior are spreading those pathogens ever more widely and quickly.” By the latter, he means our ease of global travel, where a virus originating in China can make its way through Hong Kong and then to Europe and the Americas before anyone comes down with symptoms. There are also a lot more of us than there used to be; from the planet’s point of view, we might be one of the more successful pathogens… As Quammen summarizes the situation, “Ebola virus is not in your habitat. You are in its.” Further, climate change may exacerbate the problem: drier climates can make transmission easier, as some pathogens are carried on the air more readily.

Often, the host species are bats. This surprised me to some degree. Why would bats be special? That question is still being worked out. Bats are very abundant: one quarter of all mammal species on the planet are bats. They also behave very differently from other mammals in that they can fly long distances and roost together in large communities, enabling the transmission of pathogens. However, the role of bats as hosts seems greater than their sheer numbers would suggest. There is something about certain viruses that makes them harmless to bats, which therefore carry them, yet deadly to humans. This is still an open scientific question.

Quammen goes through the basics of disease science: how scientists model disease spread, how we characterize diseases, and how we use that information to trace the origins of a disease. He provides numerous examples of how we have been able to isolate the nature of the specific virus that is attacking humans and where it might have originated. AIDS is an interesting example. There are lots of internet rumors and stories about where AIDS started. However, I have to be honest: I had never heard the current best theory. My knowledge was clouded by false information and earlier false leads, and the current best hypothesis never made it to me. For those who are interested, it seems that the leading hypothesis now is that an African hunter killed an infected chimp for meat back in the early 1900s, maybe around 1908, long before we first identified the disease in humans. That is, the virus had been circulating in humans for roughly seventy years before we even realized what was going on.

There are also interesting dynamics among the different levels of animals involved in the spread of a disease. Take ticks and Lyme disease as an example. Ticks catch the disease from mammals, most often small rodents, which then pass it on to the next generation of ticks. That is, ticks aren’t born with the disease; they catch it from other animals. As one scientist quoted by Quammen says, “If mammals didn’t make ticks sick, ticks wouldn’t make mammals sick.” Further, the relationship to the animals we do interact with, in this case deer, is not so obvious. It was thought that culling the deer would make the disease go away. It didn’t; in fact, it made it worse. This is still being studied, but it seems that the more we disrupt the native animal population, the more we increase the risk of spreading these diseases.

To reemphasize the central tenet of Spillover: the increase in these diseases is a direct consequence of humans’ expansion across the globe. However, there are a lot of complex and interconnected factors, at multiple levels, that drive the spillover of a disease from an animal host to humans. “Habitat disturbance, bushmeat hunting, the exposure of humans to unfamiliar viruses that lurk in animal hosts — that’s ecology. Those things happen between humans and other kinds of organisms, and are viewed in the moment. Rates of replication and mutation of an RNA virus, differential success for different strains of the virus, adaptation of the virus to a new host — that’s evolution. It happens within a population of some organism, as the population responds to its environment over time.”

Another theme that Quammen emphasizes is that, while we can, with enough diligence, understand and potentially control an outbreak, we cannot predict where the next one will be. There are too many things we simply don’t — and can’t — know to make such a prediction. Thus, we need a high level of readiness to respond when the next outbreak occurs. “If we can’t predict a forthcoming influenza pandemic or any other newly emergent virus, we can at least be vigilant; we can be well-prepared and quick to respond; we can be ingenious and scientifically sophisticated in the forms of our response.”

Algorithms to Live By by Brian Christian and Tom Griffiths

In some real sense, computers are like brains. They take information in, process it in some way, and try to make sense of it. A key difference is that, with computers, we can explicitly lay out all of the rules for processing that information. For brains, the rules are already there; we can only try to figure out what they are. The central thesis of Brian Christian and Tom Griffiths’ book Algorithms to Live By: The Computer Science of Human Decisions is that, by looking at how computers can be programmed to solve problems, and at which kinds of problems are easy and which are hard, we can learn something about how brains do the same.

Christian and Griffiths go systematically through a series of problem types that are central to computer science and applied math and describe how the insights into those problems give us insight into how brains handle information. One of their first examples relates to decision making. Say you have a choice you need to make from a pool of options — who to get married to, what house to buy, which secretary to hire. The basic conundrum is this: you want to make sure you get enough data to make an intelligent choice — you want to know that your choice is really a good one by comparing it to the other options — but the more information you gather, the longer you wait, the more likely the best one has already come and gone. So, you need to wait for some time to judge the quality of the pool and each candidate relative to the pool, but you can’t wait too long or you miss the best one. Under some assumptions, applied math has solved versions of this problem, a class of problems called “optimal stopping” problems. It turns out that, under certain conditions, the optimal stopping point is 37%. That is, you should use the first 37% of your options to help you build your knowledge base about the pool, and not choose any of them. But, you should choose the very first person after that 37% that is better than any of those in the first 37%. This maximizes your chances of choosing the very best person. You aren’t guaranteed to get the very best with this algorithm, but you have the best chance of getting the best.
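To make the rule concrete, here is a minimal simulation sketch of the classic formulation (my own illustration, not code from the book), in which candidates arrive in random order and you must accept or reject each one on the spot. The 37% figure is 1/e, and the simulation’s success rate hovers around that same value:

```python
# Sketch of the classic "secretary problem" rule: skip the first 37% of
# candidates, then take the first one better than everything seen so far.
import random

def pick_with_lookahead(candidates, skip_fraction=0.37):
    """Return the index chosen by the 37% rule."""
    n = len(candidates)
    cutoff = int(n * skip_fraction)
    best_seen = max(candidates[:cutoff], default=float("-inf"))
    for i in range(cutoff, n):
        if candidates[i] > best_seen:
            return i
    return n - 1  # forced to settle for the last candidate

def success_rate(n=100, trials=50_000):
    """Estimate how often the rule lands on the single best candidate."""
    wins = 0
    for _ in range(trials):
        pool = random.sample(range(n * 10), n)  # distinct scores, random order
        choice = pick_with_lookahead(pool)
        wins += pool[choice] == max(pool)
    return wins / trials

if __name__ == "__main__":
    print(f"Chose the best candidate {success_rate():.1%} of the time")
    # Should hover around 37%, i.e. 1/e, matching the classic result.
```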

This is just one example that Christian and Griffiths use to draw analogies between computer science and human thinking. They delve into a variety of topics:

  • Exploring versus Exploiting. Related to optimal stopping, this is the problem of relying on something you already know well versus trying out something new, such as a restaurant.
  • Sorting. If you have a large amount of information, how is it best to sort through it all?
  • Caching. Again, if you have a lot of information, how do you decide which of it to keep close at hand? How do you get the information you need now when you can’t have all of it at your fingertips? (See the sketch after this list.)
  • Scheduling. If you have a full to-do list, how do you optimize the best way of getting through your list? Do you want to keep the list as short as possible? Do you want to minimize how long others have to wait for you?
  • Bayes’s Rule. How do you use what you know now to make estimates about what will happen next?
  • Overfitting. What are the dangers of overthinking a problem?
  • Relaxation. Given a hard problem, how do you even begin to solve it? How do you find the best answer?
  • Randomness. When you have a huge problem, with a lot of data, so much that you can’t look at all of it, how do you figure out what it says? Think of polling.
  • Networking. In a large, interconnected world, how do you share information with everyone else?
  • Game Theory. How do we make choices when our choices involve other people and their choices?
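As promised in the caching item above, here is a minimal sketch of one caching policy, least-recently-used (LRU) eviction: when the fast storage is full, discard whatever you have gone longest without touching. This is my own illustration of a standard policy from the caching literature, not code taken from the book, and not necessarily the policy the authors emphasize:

```python
# A small LRU cache: keep a fixed number of items; when full, evict the
# one that has gone the longest without being read or written.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._items = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)  # mark as most recently used
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict the least recently used

# Usage: a two-slot cache behaves like a small desk next to a big archive.
cache = LRUCache(2)
cache.put("tea", 1)
cache.put("coffee", 2)
cache.get("tea")        # "tea" becomes most recently used
cache.put("water", 3)   # evicts "coffee", the least recently used
assert cache.get("coffee") is None
```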

All of these topics not only have direct relevance to how we program computers to work for us, to solve the hard problems that computers are better at, but also give insight into how we can organize our own thinking and data processing. With the internet, 24-hour cable news, and an ever-increasing media presence, the amount of information we are bombarded with continues to grow. Our lives become busier as we juggle work, our child’s soccer schedule, the maintenance we have to do on our house, and our social lives. A lot of what we do is process information and try to make some sense of it. While computer algorithms often don’t provide silver bullets — in fact, some problems simply can’t be solved exactly in any reasonable amount of time — they do provide some insight into how to think about certain types of problems.

Algorithms to Live By provides a nice introduction to some of the problems of computer science in a way that is easily approachable. And while the problems Christian and Griffiths describe might offer some insight into how our own brains work, at the same time, by making that connection between computers and us, they make the problems of computer science more relatable. That is, they provide an accessible pathway to learning about computer science and how some of its biggest problems are solved. Given the ubiquity of computers in our lives, it certainly doesn’t hurt to know more about how those machines work.