The aim of physics is to understand the world we live in. Given the world’s myriad objects and phenomena, understanding means seeing connections and relations between what may seem unrelated and very different: a falling apple, say, and the Moon in its orbit around the Earth. In this way, many things “fall into place” in terms of a few basic ideas, principles (laws of physics), and patterns.
As with many an intellectual activity, recognizing patterns and analogies and thinking metaphorically are essential in physics too. James Clerk Maxwell, one of the greatest physicists, put it thus: “In a pun, two truths lie hid under one expression. In an analogy, one truth is discovered under two expressions.”
Indeed, physics employs many metaphors, from a pendulum’s swing and a coin’s two-sidedness, examples already familiar in everyday language, to some of its own invention. Even the familiar ones acquire additional richness through the many physical systems to which they are applied. In this, physics uses the language of mathematics, itself a study of patterns, but with a rigor and logic not present in everyday languages and a universality that stretches across lands and peoples.
Rigor is essential because analogies can also mislead, be false or fruitless. In physics, there is an essential tension between the analogies and patterns we draw, which we must, and subjecting them to rigorous tests. The rigor of mathematics is invaluable but, more importantly, we must look to Nature as the final arbiter of truth. Our conclusions need to fit observation and experiment. Physics is ultimately an experimental subject.
Physics is not just mathematics, let alone, as some would have it, the natural world itself being nothing but mathematics. Indeed, five centuries of physics are replete with instances of the same mathematics describing a variety of different physical phenomena. Electromagnetic and sound waves have much in common but are not the same thing; indeed, they are fundamentally different in many respects. Nor are the quantum wave solutions of the Schrödinger equation the same as these classical waves, even though all involve the same Laplacian operator.
Along with seeing connections between seemingly different phenomena, physics sees the same thing from different points of view. This was already true in classical physics, and quantum physics made it even more so. For Newton, as in the later Lagrangian and Hamiltonian formulations that physicists use, the positions and velocities (or momenta) of the particles involved are given at some initial instant, and the aim of physics is to describe the state at a later instant. But with quantum physics (the uncertainty principle) forbidding simultaneous specification of position and momentum, the very meaning of the state of a physical system had to change. A choice has to be made to describe the state either in terms of positions or in terms of momenta.
Physicists use the word “representation” to describe these alternatives, which are like languages in everyday parlance. Just as one needs some language (all are equivalent) not only to communicate with others but even in one’s own thinking, so also in physics. One can use the “position representation” or the “momentum representation” (or even some other), each capable of giving a complete description of the physical system. The underlying reality itself, and most physicists believe that there is one, lies in none of these representations, residing instead in a complex space in the mathematical sense of complex versus real numbers. The state of a system in quantum physics is such a complex “wave function”, which can be thought of either in position or in momentum space.
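The equivalence of these two “languages” can be made concrete numerically: the position- and momentum-space wave functions are related by a Fourier transform, and each carries the complete information about the state. Here is a minimal sketch with NumPy; the Gaussian wave packet and the grid are arbitrary illustrative choices, not anything from the text.

```python
import numpy as np

# The same quantum state in two "representations".
N = 1024
x = np.linspace(-20, 20, N, endpoint=False)
dx = x[1] - x[0]
psi_x = np.exp(-x**2 / 4)                          # position-space wave function
psi_x = psi_x / np.sqrt(np.sum(np.abs(psi_x)**2) * dx)   # normalize

# Fourier transform to the momentum representation.
psi_p = np.fft.fft(psi_x) * dx / np.sqrt(2 * np.pi)
dp = 2 * np.pi / (N * dx)                          # momentum-grid spacing

# Either description is complete: total probability is 1 in both.
print(np.sum(np.abs(psi_x)**2) * dx)   # ≈ 1.0
print(np.sum(np.abs(psi_p)**2) * dp)   # ≈ 1.0 (Parseval's theorem)
```

Nothing is lost in translating between the two: one can transform back and forth at will, which is the sense in which neither representation is more fundamental than the other.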
Either way, the wave function is not directly accessible to us; we have no wave function meters. Since, by definition, anything observed by our experimental apparatus, as readings on real dials, is real, these outcomes access the underlying reality in what we call the “classical limit”. In particular, the step into real quantities involves taking the squared modulus of the complex wave function, with many of the phases of these complex functions averaged (blurred) out. Many so-called mysteries of quantum physics can be laid at this door. It is as if a literary text in its ur-language is inaccessible, available to us only in one or another translation.
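The loss of phase information in the squared-modulus step can be illustrated in a few lines: two wave functions differing only by an overall phase yield identical observable probabilities. The particular wave packet below is an arbitrary choice for illustration.

```python
import numpy as np

# A toy one-dimensional wave function: a Gaussian wave packet.
x = np.linspace(-10, 10, 2001)
psi = np.exp(-x**2 / 2) * np.exp(1j * 3 * x)   # complex amplitude

# The step into real, observable quantities: the squared modulus.
prob = np.abs(psi)**2

# A global phase rotation leaves every observable probability untouched;
# that phase information is "blurred out" in the classical limit.
psi_rotated = np.exp(1j * 0.7) * psi
prob_rotated = np.abs(psi_rotated)**2

print(np.allclose(prob, prob_rotated))  # True
```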
What we understand by a particle such as an electron, defined as a certain lump of mass, charge, and spin angular momentum and recognized as such by our electron detectors, is not how it is for the underlying reality. Our best current understanding, in terms of quantum field theory, is that there is a complex electron field (as there is for a proton or any other entity), a unit of its excitation realized as an electron in the detector. The field itself exists over all space and time, these being “mere” markers or parameters for describing the field function, not locations where the electron sits at an instant, as had been understood ever since Newton.
Along with the electron, nearly all the elementary particles that make up our Universe manifest as particles in the classical limit. Only two, electrically neutral, zero mass bosons (a term used for particles with integer values of spin angular momentum in terms of the fundamental quantum called Planck’s constant) that describe electromagnetism and gravitation are realized as classical electric and magnetic or gravitational fields. The very words particle and wave, as with position and momentum, are meaningful only in the classical limit. The underlying reality itself is indifferent to them even though, as with languages, we have to grasp it in terms of one or the other representation and in this classical limit.
The history of physics may be seen as progressively separating what are incidental markers or parameters used for keeping track through various representations from what is essential to the physics itself. Some of this is immediate; other cases require more sophisticated understanding that may seem at odds with (classical) common sense and experience. As long as that is kept clearly in mind, many mysteries and paradoxes are dispelled, seen as artifacts of our pushing our models and language too far and “identifying” them with the underlying reality, one in principle out of reach. We hope our models and pictures get progressively better, approaching that underlying reality as an asymptote, but they will never become one with it.
Headline image credit: Milky Way Rising over Hilo by Bill Shupp. CC BY 2.0 via Flickr.
René Descartes wrote his third book, Principles of Philosophy, as something of a rival to scholastic textbooks. He prided himself in ‘that those who have not yet learned the philosophy of the schools will learn it more easily from this book than from their teachers, because by the same means they will learn to scorn it, and even the most mediocre teachers will be capable of teaching my philosophy by means of this book alone’ (Descartes to Marin Mersenne, December 1640).
Still, what Descartes produced was inadequate for the task. The topics of scholastic textbooks ranged much more broadly than those of Descartes’ Principles; they usually had four-part arrangements mirroring the structure of the collegiate curriculum, divided as they typically were into logic, ethics, physics, and metaphysics.
But Descartes produced at best only what could be called a general metaphysics and a partial physics.
Knowing what a scholastic course in physics would look like, Descartes understood that he needed to write at least two further parts to his Principles of Philosophy: a fifth part on living things, i.e., animals and plants, and a sixth part on man. And he did not issue what would be called a particular metaphysics.
Descartes, of course, saw himself as presenting Cartesian metaphysics as well as physics, both the roots and trunk of his tree of philosophy.
But from the point of view of school texts, the metaphysical elements of physics (general metaphysics) that Descartes discussed—such as the principles of bodies: matter, form, and privation; causation; motion: generation and corruption, growth and diminution; place, void, infinity, and time—were usually taught at the beginning of the course on physics.
The scholastic course on metaphysics—particular metaphysics—dealt with other topics, not discussed directly in the Principles, such as: being, existence, and essence; unity, quantity, and individuation; truth and falsity; good and evil.
Such courses usually ended up with questions about knowledge of God, names or attributes of God, God’s will and power, and God’s goodness.
Thus the Principles of Philosophy by itself was not sufficient as a text for the standard course in metaphysics. And Descartes also did not produce texts in ethics or logic for his followers to use or to teach from.
These must have been perceived as glaring deficiencies in the Cartesian program and in the aspiration to replace Aristotelian philosophy in the schools.
So the Cartesians rushed in to fill the voids. One could mention their attempts to complete the physics—Louis de la Forge’s additions to the Treatise on Man, for example—or to produce more conventional-looking metaphysics—such as Johann Clauberg’s later editions of his Ontosophia or Baruch Spinoza’s Metaphysical Thoughts.
Cartesians in the 17th century began to supplement the Principles and to produce the kinds of texts not normally associated with their intellectual movement, that is, treatises on ethics and logic, the most prominent of the latter being the Port-Royal Logic (Paris, 1662).
By the end of the 17th century, the Cartesians, having lost many battles, ultimately won the war against the Scholastics.
The attempt to publish a Cartesian textbook that would mirror what was taught in the schools culminated in the famous multi-volume works of Pierre-Sylvain Régis and of Antoine Le Grand.
The Franciscan friar Le Grand initially published a popular version of Descartes’ philosophy in the form of a scholastic textbook, expanding it in the 1670s and 1680s; the work, Institution of Philosophy, was then translated into English together with other texts of Le Grand and published as An Entire Body of Philosophy according to the Principles of the famous Renate Descartes (London, 1694).
On the Continent, Régis issued his General System According to the Principles of Descartes at about the same time (Amsterdam, 1691), having had difficulties receiving permission to publish. Ultimately, Régis’ oddly unsystematic (and very often un-Cartesian) System set the standard for Cartesian textbooks.
The changes in the contents of textbooks, from the scholastic Summa at the beginning of the 17th century to the Cartesian System at the end, demonstrate the full range of the attempted Cartesian revolution, whose scope was not limited to physics (narrowly conceived) and its epistemology, but included logic, ethics, physics (more broadly conceived), and metaphysics.
Headline image credit: Dispute of Queen Cristina Vasa and René Descartes, by Nils Forsberg (1842-1934) after Pierre-Louis Dumesnil the Younger (1698-1781). Public domain via Wikimedia Commons.
29 November 2012 is the 140th anniversary of the death of mathematician Mary Somerville, the nineteenth century’s “Queen of Science”. Several years after her death, Oxford University’s Somerville College was named in her honor — a poignant tribute because Mary Somerville had been completely self-taught. In 1868, when she was 87, she signed J. S. Mill’s (unsuccessful) petition for female suffrage, but I think she’d be astonished that we’re still debating “the woman question” in science. Physics, in particular — a subject she loved, especially mathematical physics — is still a very male-dominated discipline, and men as well as women are concerned about it.
Of course, science today is far more complex than it was in Somerville’s time, and for the past forty years feminist critics have been wondering if it’s the kind of science that women actually want; physics, in particular, has improved the lives of millions of people over the past 300 years, but it’s also created technologies and weapons that have caused massive human, social and environmental destruction. So I’d like to revisit an old debate: are science’s obstacles for women simply a matter of managing its applications in a more “female-friendly” way, or is there something about its exclusively male origins that has made science itself sexist?
To manage science in a more female-friendly way, it would help to know whether there’s any substance behind gender stereotypes such as the claim that women prefer to solve immediate human problems and are less interested than men in detached, increasingly expensive fundamental research, or in military and technological applications. Either way, though, it’s self-evident that women should have more say in how science is applied and funded, which means it’s important to have more women in decision-making positions — something we’re still far from achieving.
But could the scientific paradigm itself be alienating to women? Mary Somerville didn’t think so, but it’s often argued (most recently by some eco-feminist and post-colonial critics) that the seventeenth-century Scientific Revolution, which formed the template for modern science, was constructed by European men, and that consequently, the scientific method reflects a white, male way of thinking that inherently privileges white men’s interests and abilities over those of women and non-Westerners. It’s a problematic argument, but justification for it has included an important critique of reductionism — namely, that Western male experimental scientists have traditionally studied physical systems, plants, and even human bodies by dissecting them, studying their components separately and losing sight of the whole system or organism.
The limits of the reductionist philosophy were famously highlighted in biologist Rachel Carson’s book, Silent Spring, which showed that the post-War boom in chemical pest control didn’t take account of the whole food chain, of which insects are merely a part. Other dramatic illustrations are climate change, and medical disasters like the thalidomide tragedy: clearly, it’s no longer enough to focus selectively on specific problems such as the action of a drug on a particular symptom, or the local effectiveness of specific technologies; instead, scientists must consider the effect of a drug or medical procedure on the whole person, whilst new technological inventions shouldn’t be separated from their wider social and environmental ramifications.
In its proper place, however, reductionism in basic scientific research is important. (The recent infamous comment by American Republican Senate nominee Todd Akin — that women can “shut down” their bodies during a “legitimate rape”, in order not to become pregnant — illustrates the need for a basic understanding of how the various parts of the human body work.) I’m not sure if this kind of reductionism is a particularly male or particularly Western way of thinking, but either way there’s much more to the scientific method than this; it’s about developing testable hypotheses from observations (reductionist or holistic), and then testing those hypotheses in as objective a way as possible. The key thing in observing the world is curiosity, and this is a human trait, discernible in all children, regardless of race or gender. Of course, girls have traditionally faced more cultural restraints than boys, so perhaps we still need to encourage girls to be actively curious about the world around them. (For instance, it’s often suggested that women prefer biology to physics because they want to help people — and yet, many of the recent successes in medical and biological science would have been impossible without the technology provided by fundamental, curiosity-driven physics.)
Like Mary Somerville, I think the scientific method has universal appeal, but I also think feminist and other critics are right to question its patriarchal and capitalist origins. Although science at its best is value-free, it’s part of the broader community, whose values are absorbed by individual scientists. So much so that Yale researchers Moss-Racusin et al. recently uncovered evidence that many scientists themselves, male and female, have an unconscious sexist bias. In their widely reported study, participants judged the same job application (for a lab manager position) to be less competent if it had a (randomly assigned) female name than if it had a male name.
In Mary Somerville’s day, such bias was overt, and it had the authority of science itself: women’s smaller brain size was considered sufficient to “prove” female intellectual inferiority. It was bad science, and it shows how patriarchal perceptions can skew the interpretation not just of women’s competence, but also of scientific data itself. (Without proper vigilance, this kind of subjectivity can slip through the safeguards of the scientific method because of other prejudices, too, such as racism, or even the agendas of funding bodies.) Of course, acknowledging the existence of patriarchal values in society isn’t about hating men or assuming men hate women. Mary Somerville met with “the utmost kindness” from individual scientific men, but that didn’t stop many of them from seeing her as the exception that proved the male-created rule of female inferiority. After all, it takes analysis and courage to step outside a long-accepted norm. And so, the “woman question” is still with us — but in trying to resolve it, we might not only find ways to remove existing gender biases, but also broaden the conversation about what sort of science we all want in the twenty-first century.
This book contains fascinating scientific research, brilliant historical detection, and inspiring tales of prior tsunami and earthquake survivors. It may not happen in our lifetime, but in geological terms, the rupture of this fault is imminent. Knowledge is power. Be ready. Book mentioned in this post: Cascadia's Fault by Jerry Thompson.
Nearly three hundred years after his death, Isaac Newton is as much a myth as a man. The mythical Newton abounds in contradictions: he is a semi-divine genius and a mad alchemist, a somber and solitary thinker and a passionate religious heretic. Myths usually have an element of truth to them, but how many of the Newtonian varieties are true? Here are ten of the most common, debunked or confirmed by the evidence of his own private papers, kept hidden for centuries and now freely available online.
10. Newton was a heretic who had to keep his religious beliefs secret.
True. While Newton regularly attended chapel, he abstained from taking holy orders at Trinity College. No official excuse survives, but numerous theological treatises he left make perfectly clear why he refused to become an ordained clergyman, as College fellows were normally obliged to do. Newton believed that the doctrine of the Trinity, in which the Father, the Son and the Holy Ghost were given equal status, was the result of centuries of corruption of the original Christian message and therefore false. Trinity College’s most famous fellow was, in fact, an anti-Trinitarian.
9. Newton never laughed.
False, but only just. There are only two specific instances that we know of when the great man laughed. One was when a friend to whom he had lent a volume of Euclid’s Elements asked what the point of it was, ‘upon which Sir Isaac was very merry.’ (The point being that if you have to ask what the point of Euclid is, you have already missed it.) So far, so moderately funny. The second time Newton laughed was during a conversation about his theory that comets inevitably crash into the stars around which they orbit. Newton noted that this applied not just to other stars but to the Sun as well and laughed while remarking to his interlocutor John Conduitt ‘that concerns us more.’
8. Newton was an alchemist.
True. Alchemical manuscripts make up roughly one tenth of the ten million words of private writing that Newton left on his death. This archive contains very few original treatises by Newton himself, but what does remain tells us in minute detail how he assessed the credibility of mysterious authors and their work. Most are copies of other people’s writings, along with recipes, a long alchemical index and laboratory notebooks. This material puzzled and disappointed many who encountered it, such as biographer David Brewster, who lamented ‘how a mind of such power, and so nobly occupied with the abstractions of geometry, and the study of the material world, could stoop to be even the copyist of the most contemptible alchemical work, the obvious production of a fool and a knave.’ While Brewster tried to sweep Newton’s alchemy under the rug, John Maynard Keynes made a splash when he wrote provocatively that Newton was the ‘last of the magicians’ rather than the ‘first king of reason.’
7. Newton believed that life on earth (and most likely on other planets in the universe) was sustained by dust and other vital particles from the tails of comets.
True. In Book 3 of the Principia, Newton wrote extensively about how the rarefied vapour in comets’ tails was eventually drawn to earth by gravity, where it was required for the ‘conservation of the sea, and fluids of the planets’ and was most likely responsible for the ‘spirit’ which makes up the ‘most subtle and useful part of our air, and so much required to sustain the life of all things with us.’
6. Newton was a self-taught genius who made his pivotal discoveries in mathematics, physics and optics alone in his childhood home of Woolsthorpe while waiting out the plague years of 1665-7.
False, though this is a tricky one. One of the main treasures that scholars have sought in Newton’s papers is evidence for his scientific genius and for the method he used to make his discoveries. It is true that Newton’s intellectual achievement dwarfed that of his contemporaries. It is also true that as a 23-year-old, Newton made stunning progress on the calculus, and on his theories of gravity and light, while on a plague-induced hiatus from his undergraduate studies at Trinity College. Evidence for these discoveries exists in notebooks which he saved for the rest of his life. However, notebooks kept at roughly the same time, both during his student days and his so-called annus mirabilis, also demonstrate that Newton read and took careful notes on the work of leading mathematicians and natural philosophers, and that many of his signature discoveries owe much to them.
5. Newton found secret numerological codes in the Bible.
True. Like his fellow analysts of scripture, Newton believed there were important meanings attached to the numbers found there. In one theological treatise, Newton argues that the Pope is the anti-Christ based in part on the appearance in Scripture of the number of the name of the beast, 666. In another, he expounds on the meaning of the number 7, which figures prominently in the numbers of trumpets, vials and thunders found in Revelation.
4. Newton had terrible handwriting, like all geniuses.
False. Newton’s handwriting is usually clear and easy to read, though it did change somewhat throughout his life. His youthful hand is slightly more angular, while in his old age he wrote in a more open and rounded hand. More challenging than deciphering his handwriting is making sense of Newton’s heavily worked-over drafts, which are crowded with deletions and additions. He also left plenty of very neat drafts, especially of his work on church history and doctrine, which some considered suspiciously clean: evidence, said his 19th-century cataloguers, of Newton’s having fallen in love with his own handwriting.
3. Newton believed the earth was created in seven days.
True. Newton believed that the Earth was created in seven days, but he assumed that the duration of one revolution of the planet at the beginning of time was much slower than it is today.
2. Newton discovered universal gravitation after seeing an apple fall from a tree.
False, though Newton himself was partly responsible for this myth. Seeking to shore up his legacy at the end of his life, Newton told several people, including Voltaire and his friend William Stukeley, the story of how he had observed an apple falling from a tree while waiting out the plague in Woolsthorpe between 1665-7. (He never said it hit him on the head.) At that time Newton was struck by two key ideas—that apples fall straight to the center of the earth with no deviation and that the attractive power of the earth extends beyond the upper atmosphere. As important as they are, these insights were not sufficient to get Newton to universal gravitation. That final, stunning leap came some twenty years later, in 1685, after Edmond Halley asked Newton if he could calculate the forces responsible for an elliptical planetary orbit.
Image credit: Portrait of Isaac Newton by Sir Godfrey Kneller. Public domain via Wikimedia Commons.
If a tree falls in the forest, and there’s nobody around to hear, does it make a sound?
For centuries philosophers have been teasing our intellects with such questions. Of course, the answer depends on how we choose to interpret the use of the word ‘sound’. If by sound we mean compressions and rarefactions in the air which result from the physical disturbances caused by the falling tree and which propagate through the air with audio frequencies, then we might not hesitate to answer in the affirmative.
Here the word ‘sound’ is used to describe a physical phenomenon – the wave disturbance. But sound is also a human experience, the result of physical signals delivered by human sense organs which are synthesized in the mind as a form of perception.
Now, to a large extent, we can interpret the actions of human sense organs in much the same way we interpret mechanical measuring devices. The human auditory apparatus simply translates one set of physical phenomena into another, leading eventually to stimulation of those parts of the brain cortex responsible for the perception of sound. It is here that the distinction arises. Everything to this point is explicable in terms of physics and chemistry, but the process by which we turn electrical signals in the brain into human perception and experience in the mind remains, at present, unfathomable.
Philosophers have long argued that sound, colour, taste, smell and touch are all secondary qualities which exist only in our minds. We have no basis for our common-sense assumption that these secondary qualities reflect or represent reality as it really is. So, if we interpret the word ‘sound’ to mean a human experience rather than a physical phenomenon, then when there is nobody around there is a sense in which the falling tree makes no sound at all.
This business about the distinction between ‘things-in-themselves’ and ‘things-as-they-appear’ has troubled philosophers for as long as the subject has existed, but what does it have to do with modern physics, specifically the story of quantum theory? In fact, such questions have dogged the theory almost from the moment of its inception in the 1920s. Ever since it was discovered that atomic and sub-atomic particles exhibit both localised, particle-like properties and delocalised, wave-like properties, physicists have been embroiled in a debate about what we can and can’t know about the ‘true’ nature of physical reality.
Albert Einstein once famously declared that God does not play dice. In essence, a quantum particle such as an electron may be described in terms of a delocalized ‘wavefunction’, with probabilities for appearing ‘here’ or ‘there’. When we look to see where the electron actually is, the wavefunction is said to ‘collapse’ instantaneously, and the electron appears ‘here’ with a frequency consistent with the probability predicted by quantum theory. But there is no predicting precisely where an individual electron will be found. Chance is inherent in the collapse of the wavefunction, and it was this feature of quantum theory that got Einstein so upset. To make matters worse, if the collapse is instantaneous then it implies what Einstein called a ‘spooky action-at-a-distance’ which, he argued, appeared to violate a key postulate of his own special theory of relativity.
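The statistical character of collapse can be mimicked in a few lines of code: each individual outcome is unpredictable, but the long-run frequencies follow the squared moduli of the quantum amplitudes (the Born rule). A toy two-outcome sketch, with arbitrarily chosen amplitudes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-outcome state: complex amplitudes for "here" and "there".
amps = np.array([np.sqrt(0.7), np.sqrt(0.3) * 1j])
probs = np.abs(amps)**2          # Born probabilities: ~[0.7, 0.3]
probs = probs / probs.sum()      # guard against floating-point drift

# Each "measurement" collapses to one definite outcome; no individual
# outcome is predictable, only the long-run frequency.
outcomes = rng.choice(["here", "there"], size=100_000, p=probs)
freq_here = np.mean(outcomes == "here")
print(freq_here)                 # close to 0.7
```

This is only a classical simulation of the statistics, of course; it says nothing about the mechanism (if any) behind the collapse, which is precisely what the debate is about.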
So what evidence do we have for this mysterious collapse of the wavefunction? Well, none, actually. We postulate the collapse in an attempt to explain how a quantum system with many different possible outcomes before measurement transforms into a system with one and only one result after measurement. To Irish physicist John Bell this seemed to be at best a confidence-trick, at worst a fraud. ‘A theory founded in this way on arguments of manifestly approximate character,’ he wrote some years later [...]
Émilie du Châtelet, wrote Voltaire, ‘was a great man whose only fault was being a woman.’ Du Châtelet has paid the penalty for being a woman twice over. During her life, she was denied the educational opportunities and freedom that she craved. ‘Judge me for my own merits,’ she protested: ‘do not look upon me as a mere appendage to this great general or that renowned scholar’ – but since her death, she has been demoted to subsidiary status as Voltaire’s mistress and Isaac Newton’s translator.
Too often moulded into hackneyed stereotypes – the learned eccentric, the flamboyant lover, the devoted mother – du Châtelet deserves more realistic appraisals as a talented yet fallible woman trapped between overt discrimination and inner doubts about her worth. ‘I am in my own right a whole person,’ she insisted. I hope she would appreciate how I see her …
Émilie du Châtelet (1706-49) was tall and beautiful. Many intellectual women would object to an account starting with their looks, but du Châtelet took great care with her appearance. She spent a fortune on clothes and jewellery, acquiring the money from her husband, a succession of lovers, and her own skills at the gambling table (being mathematically gifted can bring unexpected rewards). She brought the same intensity to her scientific work, plunging her hands in ice-cold water to keep herself awake as she wrote through the night. This whole-hearted enthusiasm for every activity she undertook explains why I admire her so much. The major goal of life, she believed, was to be happy — and for her that meant indulging but also balancing her passions for food, sex and learning.
Born into a wealthy family, du Châtelet benefited from an enlightened father who left her free to browse in his library and hired tutors to give her lessons more appropriate for boys than for marriageable girls. By the time she was twelve, du Châtelet could speak six languages, but it was not until her late twenties that she started to immerse herself in mathematics and Newtonian philosophy. By then, she was married to an elderly army officer, had two surviving children, and was developing intimate friendships with several clever young men who helped her acquire the education she was not allowed to gain at university.
When Voltaire’s radical politics provoked a warrant for his arrest, she concealed him in her husband’s run-down estate at Cirey and returned to Paris to restore his reputation. Over the next year, she oscillated between rural seclusion with Voltaire and partying in Paris, but after some prompting, she eventually made her choice and stuck to it. For fifteen years, they lived together at Cirey, happily embroiled in a private world of intense intellectual endeavour laced with romance, living in separate apartments linked by a secret passage and visited from time to time by her accommodating husband.
For decades, French scholars had been reluctant to abandon the ideas of their own national hero, René Descartes, and instead adopt those of his English rival, Newton. They are said to have been converted by a small book that appeared in 1738: Elements of Newtonian Philosophy. The only name on the title-page is Voltaire’s, but it is clear that this was a collaborative venture in which du Châtelet played a major role: as Voltaire to
Hi, all. If you’re in the mood for some short fiction, you can now read two of my short stories, written last year while I was deep into my quantum physics research for my upcoming trilogy, INTO THE PARALLEL. You can tell my brain was pretty physicsy at the time.
A SKIP OF THE MIND: A physicist must find a unique solution to the problem of time travel if he wants to save his wife.
GAMEMASTER: They say high school is a game . . . For one girl, it’s a game she’s in charge of. A stroke of a key, an equation, a few changes in molecules and atoms here and there, and suddenly the losers aren’t such outcasts anymore. Nicki isn’t doing it to be noble; she’s doing it for sport. Because she can. But what happens to the people she’s remade? Who’s in charge of them now?
The realm of theoretical physics is teeming with abstract and beautiful concepts, and the process of bringing them into existence, and then explaining them, demands profound creativity, according to Giovanni Vignale, author of The Beautiful Invisible: Creativity, imagination, and theoretical physics. In the excerpt below Vignale discusses the beginnings of theoretical physics and the abstract.
Physics, most of us would agree, is the basic science of nature. Its purpose is to discover the laws of the natural world. Do such laws exist? Well, the success of physics at identifying some of them proves, in retrospect, that they do exist. Or, at least, it proves that there are Laws of Physics, which we can safely assume to be Laws of Nature.
Granted, it may be difficult to discern this lofty purpose when all one hears in an introductory course is about flying projectiles and swinging pendulums, strings under tension and beams in equilibrium. But at the beginning of the enterprise there were some truly fundamental questions such as: the nature of matter, the character of the forces that bind it together, the origin of order, the fate of the universe. For centuries humankind had been puzzling over these questions, coming up with metaphysical and fantastic answers. And it stumbled, and it stumbled, until one day—and here I quote the great Austrian writer and ironist, Robert Musil:
. . . it did what every sensible child does after trying to walk too soon; it sat down on the ground, contacting the earth with a most dependable if not very noble part of its anatomy, in short, that part on which one sits. The amazing thing is that the earth showed itself uncommonly receptive, and ever since that moment of contact has allowed men to entice inventions, conveniences, and discoveries out of it in quantities bordering on the miraculous.
This was the beginning of physics and, actually, of all science: an orgy of matter-of-factness after centuries of theology. Careful and systematic observation of reality, coupled with quantitative analysis of data and an egregious indifference to theories that could not be tested by experiment became the hallmark of every serious investigation into the nature of things.
But even as they were busy observing and experimenting, the pioneers of physics did not fail to notice a peculiar feature of their discipline. Namely, they realized that the laws of nature were best expressed in an abstract mathematical language—a language of triangles and circles and limits—which, at first sight, stood almost at odds with the touted matter-of-factness of experimental science. As time went by, it became clear that mathematics was much more than a computational tool: it had a life of its own. Things could be discovered by mathematics. John Couch Adams and, independently, Urbain Le Verrier, using Newton’s theory of gravity, computed the orbit of Uranus and found that it deviated from the observed one. Rather than giving up, they did another calculation showing that the orbit of Uranus could be explained if there were another planet pulling on Uranus according to Newton’s law of gravity. Such a planet had never been seen, but Adams and Le Verrier told the astronomers where to look for it. And, lo and behold, the planet—Neptune—was there, waiting to be discovered. That was in 1846.
Even this great achievement pales in comparison with things that happened later. In the 1860s, James Clerk Maxwell trusted mathematics—and not just the results of a calculation, but the abstract structure of a set of equations—to predict the existence of electromagnetic waves. And electromagnetic waves (of which visible light is an example) were controllably produced in the lab shortly afterwards.
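Maxwell’s prediction can even be checked on the back of an envelope: his equations give a wave speed of 1/√(μ₀ε₀), computable from the two measured electromagnetic constants. A minimal sketch, using standard SI values:

```python
import math

# Maxwell's equations predict waves travelling at c = 1/sqrt(mu0 * eps0).
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
eps0 = 8.854187817e-12     # vacuum permittivity, F/m

c = 1 / math.sqrt(mu0 * eps0)
print(f"Predicted wave speed: {c:.3e} m/s")
```

The result comes out at about 3 × 10⁸ m/s, matching the measured speed of light, which is what convinced Maxwell that light itself is an electromagnetic wave.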
*** One They said it couldn’t be done. Well, that’s not exactly true. They said it couldn’t be done by a 17-year-old girl sitting alone in her bedroom on a Saturday morning. Well, that’s not exactly true, either, since it’s not like there’s some physicist out there who specifically made that prediction—“A seventeen-year-old girl in [...]
By Frank Close
To readers of Neutrino, rest assured: there is no need yet for a rewrite based on news that neutrinos might travel faster than light. I have already advertised my caution in The Observer, and a month later nothing has changed. If anything, concerns about the result have increased.
Last night, I was feeling philosophical, but had nothing to ponder on. I looked at my PC and was struck with the sheer lack of technical expertise I had in electronics, rendering any opinion I had on the intricacies of the device around 10 years out of date. I started looking on Wikipedia, before feeling pretty belittled by my lack of knowledge of space physics. Then, inspiration hit me. Actually, it hit the window.
“Bzzzz. Whack. Bzzzzzz. Whack. Bzzzzz. Whack.” I looked around to try and locate the origin of such a debacle, and then I saw it. A fly was buzzing around, hitting off the window repeatedly.
I found myself at a fork in the path of destiny in my life. It would have been easy to ignore the offending beast and go back to my scholarship on the theorised negative pressures exhibited by dark matter. But no, unfortunately I chose the other path open to me: I concentrated on the fly.
Bzzzzzz. Whack. Bzzzzzz. Whack.
On watching the poor creature, I couldn’t help but admire its boundless stoicism and determination. However, the net feeling in my mind was not one of reckless pity. It was more a feeling of disappointment. I wasn’t disappointed in the fly: how could a creature that is probably less than a day old really understand the enormity of its stupidity? Rather, I was disappointed with evolution. I had really hoped that over two billion years of cumulative learning and development, the animal kingdom would have overcome such a barrier.
Evolution is always cited as such a wonderfully intelligent thing. Even as I write this, I can hear David Attenborough saying, “Look how the tree has learned to lean towards the sunlight.” Of course there is far more to be said for evolution and its wonderful creations. But 2 billion years? I had really hoped for more.
I was starting to get quite upset. If I was locked in a room the size of earth for 2 billion years, I would have expected to design a fly that could learn from its mistakes.
The window was dirty, I noticed with increasing desperation. Surely the resultant deviation from transparency would register with the fly?
This was the last straw. I had to do it, I had to be the vector for natural selection. From that moment on, any offspring of the fly would spend their entire adult lives whacking into inanimate objects without the brainpower to overcome such a simple problem.
Bzzzzzz. Whack. I knew then, if I didn’t do it, the animal kingdom would be doomed. I reached for a newspaper. I rolled it up in my hands. The future of the world was in my hands.
Bzzzzzz. Whack. This only steeled my resolve. I made my move. The fly, with almost pre-cognitive reflexes, dodged to the side and flew away. Knowing that my newspaper probably created a pressure wave that aided the fly in its escape, I poked holes in my holy smiting tool, ready to continue in my role as God of evolution.
Bzzzzzzz. Whack. The fly was back. I leapt at it.
The ensuing struggle was too horrible to even describe. The bloodshed? Non-existent. The perspiration on my brow? Fairly prominent. The fly? Still alive. My curtains? In a heap on the floor. My desktop belongings? Scattered. My glass of coke? Spilt.
The fly had won. I slumped in a heap of desolation on the floor. I couldn’t help but wonder if the attempts to get through the window were simply an ingenious experiment to prove quantum theory (if you hit an object enough times you will go through it) or if the fly’s erogenous zones were on its forehead. Either way, the little bugger had won.
Bzzzzzzz. Whack. Back to trying to decipher space physics, I suppose.
The Plot: "I'm dead." There is much she doesn't remember, not even her name. But she knows that once she was alive, with a body, and now she is dead. Objects are floating ... keys. Pine cone. Bracelet. Sweatshirt. Touch the sweatshirt, and suddenly she has a place, a time, a when, a where, and finally, a name. Maddy. Madison Stanton. 17. She's dead. But why?
The Good: Each object, bracelet, keys, sweatshirt, is something that, when alive, Maddy lost. Touching the object brings Maddy back to that time, that moment, and she can relive that memory again and again and again. If, in that captured moment, alive-Maddy finds the object, the door is shut and that memory cannot be revisited.
So a ghost story. A dead girl revisiting her life story.
Maddy, revisiting a physics class: "something can be two things at once, and that observing them influences which of the two they are... Ms. Winters has moved to talking about how everything in the universe is connected in ways that can't always be seen or understood. ...at the subatomic level no time has to pass for one particle to know about and be affected by what's happening to another." Maddy's head is about to explode, and so is mine, but what Huntley has done is taken the fantastical (the afterlife, ghosts, Heaven) and wrapped it in science.
Touch an object, visit that time, and so alive-Maddy and dead-Maddy are there, both at the same time. At some point Maddy realizes she can influence the past, her life; an object may be found, a bit of reality shifted. But no matter what little difference she makes, which gives her a feeling of disquiet as she erases one memory and creates another, the end remains the same. She is Madison Stanton. She never visits a time later than age seventeen. And the way this works and intertwines, changes, being and observing -- is all explained by physics.
Madison's journey through her life is not offered in a linear fashion; she jumps in time, back and forth, and we get a scattered feel for her life and family. She is in love with Gabe, happy to be wearing his sweatshirt; then she is meeting him at her sister's wedding. Madison plays with her friend Sandra, then she is six and in Disney World, then she is eleven. She is enemies with Tammy, then friends, then the slumber party that ended their friendship. Slowly, for both Madison and the reader, the puzzle of her life, her death, her afterlife is revealed.
Huntley offers a few possibilities as to why, and how, Maddy died. While not a classic whodunit mystery, there is suspense, and Maddy is trying to find out why she lost that which is most important to us all. Life.
Inventive storytelling, beautiful language, a book that gets better on rereading, a narrator whose death you mourn and dread even though you know it's unavoidable; it's easy to see why this is on the Morris Award shortlist.
A pipe or tube of glass, metal or other material, bent so that one leg is longer than the other, and used for drawing off liquids by means of atmospheric pressure, which forces the liquid up the shorter leg and over the bend in the pipe.
Margot Charlton of the OED’s staff explained, “The OED entry for siphon dates from 1911 and was written by editors who were not scientists.” She was surprised that nobody had queried the definition in those 99 years. The definition of siphon will be corrected in the next edition of the OED.
Atmospheric pressure is involved in starting the process of moving the liquid up the shorter leg of the siphon. However, once the fluid is over the bend in the tube, it is gravity, the weight of the liquid, that pulls it down the longer leg.
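As a rough sanity check on the gravity explanation, a Torricelli-style estimate gives the exit speed from the height drop alone: atmospheric pressure pushes equally at both ends of the tube and cancels out. A sketch with illustrative numbers (the 0.5 m drop is just an example):

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def siphon_exit_speed(height_drop_m):
    """Torricelli-style estimate: exit speed depends only on the height
    difference between the liquid surface and the outlet (i.e. gravity);
    atmospheric pressure cancels between the two ends of the tube."""
    return math.sqrt(2 * g * height_drop_m)

# A siphon draining over a 0.5 m drop:
print(f"{siphon_exit_speed(0.5):.2f} m/s")  # about 3.13 m/s
```

Notice that no atmospheric-pressure term appears in the final formula, which is the whole point of the correction to the dictionary definition.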
Dr. Hughes reported, “An extensive check of online and offline dictionaries did not reveal a single dictionary that correctly referred to gravity being the operative force in a siphon.” I guess he did not check Merriam-Webster’s 11th Collegiate Dictionary, which provides the following definition:
1 a : a tube bent to form two legs of unequal length by which a liquid can be transferred to a lower level over an intermediate elevation by the pressure of the atmosphere in forcing the liquid up the shorter branch of the tube immersed in it while the excess of weight of the liquid in the longer branch when once filled causes a continuous flow
Amsco has editors who are scientists, but we are human and sometimes make a mistake. Like the OED, once we become aware of it, we correct it in the next reprint.
I got the idea for this post from my son, Don, who sent me a link to an article in The Register, an Information Technology journal from the United Kingdom. I enjoyed the article so much that I subscribed. On Tuesday, I saw the following headline in the science section of The Register: “Siphon Wars: Pressurist Weighs into Gravitite Boffin.” This could be trouble, I thought, and it was. Rather than try to paraphrase (see tomorrow’s post by Lauren), I decided to quote from The Register.
As I age, I am beginning to be more aware of the importance of protecting myself from the sun. I wear sunscreen, even in the winter. However, I am not really good about putting on my sunglasses. I do have an anti-UV coating on my bifocals, which I hope is helpful, and when I remember them, I have stylish clip-on polarized sunglasses. I recently read an article that got me thinking more about sunglasses and ultraviolet (UV) radiation.
As you can see from the top bar in the diagram above, there are different forms of light, ultraviolet and visible. Human eyes detect visible light, which divides into blue, green, and red, as shown on the bottom bar. (We detect infrared as heat.) Ultraviolet light is part of the spectrum of light; its wavelength is shorter than that of visible light. Some animals, such as bees and some birds, can see ultraviolet light, and many flowers and birds have patterns that are visible only in ultraviolet. Humans cannot see ultraviolet light. However, parts of the eye such as the cornea, the lens, and the retina can be damaged when they absorb too much UV light. Some scientists think that exposure to UV light may cause cataracts, a clouding of the lens of the eye.
All UV light is more energetic than visible light, which is why it causes damage. The shorter the wavelength of light, the more energetic it is. The more energetic the light is, the more damage it can cause. As you can see from the bottom part of the diagram, there are several types of ultraviolet light. Because our atmosphere protects us from most of the other forms of ultraviolet radiation, we should be most concerned with UVA and UVB. UVA has a longer wavelength than UVB. In addition to damaging our eyes, UVA and UVB cause sunburn and skin cancer.
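The wavelength-energy relation behind this is E = hc/λ: halve the wavelength and you double the photon energy. A quick sketch makes the ordering concrete (the 300 nm and 360 nm wavelengths are representative values I have chosen for UVB and UVA, not figures from the article):

```python
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt

def photon_energy_eV(wavelength_nm):
    """Energy of a single photon, E = h*c / wavelength, in electronvolts."""
    return h * c / (wavelength_nm * 1e-9) / eV

# Shorter wavelength => more energetic photon => more potential damage.
for name, nm in [("UVB", 300), ("UVA", 360), ("blue", 450), ("red", 650)]:
    print(f"{name:4s} ({nm} nm): {photon_energy_eV(nm):.2f} eV")
```

A UVB photon at 300 nm carries roughly twice the energy of a red photon at 650 nm, which is why the shorter-wavelength end of the spectrum does the damage.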
With this in mind, I went to my ophthalmologist to have my eyes checked. He gave me a new prescription for lenses. I took my prescription to a local eyeglass shop (Wize Eyes), where the friendly saleswoman helped me pick out stylish new frames. Knowing that I tend to forget to put on my sunglasses, I chose lenses that darken in sunlight and protect my eyes from UV light. This picture was taken on a cloudy evening, so the lenses do not look very dark.
There is a Jane Austen-esque phrase in my book: “it is a ceaseless wonder that our universal and objective science comes out of human – sometimes all too human – enquiry”. Physics is rather hard to blog, so I’ll write instead about the practitioners of science – what are they like? Are there certain personality types that do science? Does the science from different countries end up being different?
Without question there are fewer women physicists than men physicists and, also without question, this is a result of both nature and nurture. Does it really matter how much of the ‘blame’ should be apportioned to nature and how much to nurture? Societies have evolved the way they have for a reason, and they have evolved to have fewer women pursuing science than men (at present). Perhaps ‘intelligence’ has even been defined in terms of what men are good at?
Do a disproportionate number of physicists suffer from Asperger Syndrome (AS)? I deplore the fashion for retrospectively diagnosing the most famous physicists, such as Newton and Einstein, as suffering in this way. However, I’ll jump on the bandwagon and offer my own diagnosis: these two had a different ‘syndrome’ – they were geniuses, period. Contrary to common supposition, it would not be an asset for a scientist to have AS. Being single-minded and having an eye for detail – good, but having a narrow focus of interest and missing too much of the rich tapestry of social and worldly interactions – not good, and less likely to lead to great heights of creativity.
In the late 18th and early 19th centuries, the science of energy was concentrated in two nations, England and France. The respective scientists had different characteristics. In England (strictly, Britain) the scientists included an undue number of lone eccentrics, such as the rich gentleman-scientists carrying out researches in their own, privately funded laboratories (e.g. Brook Taylor, Erasmus Darwin, Henry Cavendish and James Joule), and also religious non-conformists of average or modest financial means (e.g. Newton, Dalton, Priestley and Faraday). This contrasts with France, where, post-revolution, the scientist was a salaried professional and worked on applied problems in the new state institutions (e.g. the French Institute and the École Polytechnique). The quality and number of names concentrated into one short period and one place (Paris), particularly in applied mathematics, has never been equalled: Lagrange, Laplace, Legendre, Lavoisier and Lamarck, and these are only the L’s. As the historian of science, Henry Guerlac, remarked, science wasn’t merely a product of the French Revolution, it was the chief cultural expression of it.
There was another difference between the English and French scientists, as sloganized by the science historian Charles Gillispie: “the French…formulate things, and the English do them.” For example, Lavoisier developed a system of chemistry, including a new nomenclature, while James Watt designed and built the steam engine.
From the mid-19th century onwards German science took a more leading role, and especially noteworthy was the rise of new universities and technical institutes. While many German scientists had religious affiliations (for example Clausius was a Lutheran), their science was neutral with regard to religion, and this was different from the trend in Britain. For example, Thomson (later Lord Kelvin) talked of the Earth “waxing old” and quoted other phrases from the Bible, and, although he was not explicit, he appears to have had religious objections to Darwin’s Theory of Evolution (at any rate, he wanted his ‘age of the Earth’ calculations to contradict Darwin’s Theory).
Whereas personal, cultural, social, economic and political factors will undoubtedly influence the course of science, the ultimate laws must be free of all such associations. Presumably the laws of Thermodynamics would still hold, and take the same form, whoever had discovered them.
Energy is the go of things, the driver of engines, devices and all physical processes. It can come in various forms (electrical, chemical, rest mass, curvature of spacetime, light, heat and so on) and change between these forms, but the total is always conserved. Newton missed energy, and it was Leibniz who discovered kinetic energy (he called it vis viva). The idea was promoted on the continent, chiefly by one family, the Swiss family of feuding mathematicians, the Bernoullis, in the first half of the 18th century. The more subtle concept, potential energy, slipped in over a hundred years later, uninvited, like the thirteenth fairy at the party.
In Feynman’s profound allegory (‘Dennis the Menace’ playing with blocks), energy is defined by its property of being conserved. But this doesn’t answer all our intuitions about energy. Why does it change smoothly between its various forms? For example, when a child swings on a swing, her kinetic energy decreases as the swing climbs (and she gains gravitational potential energy); then, as the swing descends, she goes faster and faster.
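The swing example can be simulated directly. Treating the swing as a simple pendulum (an idealization, with made-up numbers for the child's mass and the rope length), kinetic and potential energy trade off smoothly while their sum stays essentially constant:

```python
import math

# Toy swing modelled as a simple pendulum, integrated with a leapfrog
# (kick-drift-kick) step so that total energy is well conserved.
g, L, m = 9.81, 2.0, 30.0   # gravity (m/s^2), rope length (m), mass (kg) - illustrative
theta, omega = 0.8, 0.0     # initial angle (rad) and angular speed (rad/s)
dt = 0.001                  # time step, s

def energies(theta, omega):
    ke = 0.5 * m * (L * omega) ** 2          # kinetic energy
    pe = m * g * L * (1 - math.cos(theta))   # gravitational potential energy
    return ke, pe

e0 = sum(energies(theta, omega))
for _ in range(5000):                        # five seconds of swinging
    omega += -(g / L) * math.sin(theta) * (dt / 2)
    theta += omega * dt
    omega += -(g / L) * math.sin(theta) * (dt / 2)

ke, pe = energies(theta, omega)
print(f"KE={ke:.1f} J  PE={pe:.1f} J  drift in total={(ke + pe) - e0:.2e} J")
```

At every instant the two forms exchange smoothly, yet their sum barely drifts from its starting value, which is conservation of energy seen numerically.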
A different approach holds the answer. Consider the walk to the shops. You could take the shortest route, or you could optimize other aspects, e.g. take a longer but less hilly route, or a shadier one, or one with the fewest road-crossings. Nature also works in this optimizing way: it tries to minimize the total ‘action’ between a starting place and a final destination. ‘Action’ has the dimensions of ‘energy’ times ‘time’, and, in order to minimize the action, the energy must be able to change in a prescribed way, smoothly and continuously, between its two forms, kinetic and potential. (The Principle of Least Action was discovered by an eccentric Frenchman, Pierre-Louis Moreau de Maupertuis, while head of the Berlin Academy of Science, in the mid-18th century.)
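The principle can be illustrated numerically. For a ball thrown straight up and caught two seconds later, the physical (parabolic) height profile gives a smaller discretized action than a deliberately wiggled path with the same endpoints. A toy sketch, with arbitrary mass and flight time:

```python
import math

m, g, T, N = 1.0, 9.81, 2.0, 2000   # mass (kg), gravity, flight time (s), steps
dt = T / N

def action(y):
    """Discretized action: sum of (kinetic - potential energy) * dt along a path
    given as a list of N+1 heights sampled at equal time intervals."""
    s = 0.0
    for i in range(N):
        v = (y[i + 1] - y[i]) / dt           # velocity on this segment
        y_mid = 0.5 * (y[i] + y[i + 1])      # midpoint height
        s += (0.5 * m * v * v - m * g * y_mid) * dt
    return s

t = [i * dt for i in range(N + 1)]
true_path = [0.5 * g * ti * (T - ti) for ti in t]   # Newtonian free flight
# A perturbed path with the same start and end points:
wiggled = [yi + 0.3 * math.sin(math.pi * ti / T) for yi, ti in zip(true_path, t)]

print(action(true_path) < action(wiggled))  # True: the real path has less action
```

Any smooth perturbation that keeps the endpoints fixed raises the action, which is exactly why the kinetic and potential contributions must trade off in the one prescribed, continuous way.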
What are kinetic and potential energy? Kinetic energy is the energy of motion of an individual body whereas potential energy is the energy of interaction of parts within a system. Potential energy must be specified for each new scenario, but kinetic energy comes in one essential form and is more fundamental in this sense. However, as potential energy relates to internal aspects (of a system), it doesn’t usually change for differently moving ‘observers’. For example, the game of billiards in the lounge of the ocean liner continues unaffected, whether that liner is coasting smoothly at 30 kph or whether it’s moored to a buoy. The kinetic energy of the liner is vastly different in the two cases.
But sometimes potential energy and even mass do change from one ‘reference frame’ to another. The more fundamental quantity is the ‘least action’, as this stays the same, whatever the (valid) ‘observer’.
Heat energy is the sum of the individual microscopic kinetic energies. But the heat energy and the kinetic energy of an everyday object are very different (e.g. the kinetic energy of a kicked football and the heat energy of a football left to warm in the sun). In fact, for the early 19th-century natural philosophers, considering heat as a form of energy was like committing a category error. The slow bridging of this error by people like Daniel Bernoulli, Count Rumford, Julius Robert Mayer and James Joule makes a very interesting tale.
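The category difference is also a difference of scale, which a back-of-envelope comparison makes vivid. With illustrative (not measured) numbers for the football's mass, kick speed and specific heat, the disordered heat energy gained from mild warming dwarfs the ordered kinetic energy of a hard kick:

```python
# Rough comparison, all figures illustrative assumptions:
# ordered kinetic energy of a kicked football vs. the disordered
# (heat) energy added by warming the same ball in the sun.
m = 0.43         # football mass, kg
v = 25.0         # speed after a hard kick, m/s
c_spec = 1000.0  # assumed specific heat of the ball's materials, J/(kg*K)
dT = 10.0        # warming in the sun, degrees C

ke = 0.5 * m * v ** 2       # bulk kinetic energy of the kick
heat = m * c_spec * dT      # heat energy absorbed while warming

print(f"kick: {ke:.0f} J   warming: {heat:.0f} J")
```

Even a modest warming stores tens of times more energy in random molecular motion than the most powerful kick puts into motion of the ball as a whole, which helps explain why the two were so hard to recognize as the same kind of thing.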
With regards to the looming energy crisis and global warming, here are the things we must remember:
1. Nature always counts the true cost, even if we don’t
2. There is no such thing as safe energy – it is energetic, after all
3. As the sink of all our activities becomes warmer, so all our ‘engines’ (cars, humans and so on) will run less efficiently
4. We must consider not only energy but also ‘least action’ – and take action.
Ray Davis was the first person to look into the heart of a star. He did so by capturing neutrinos, ghostly particles that are produced in the centre of the Sun and stream out across space. As you read this, billions of them are hurtling through your eyeballs at almost the speed of light, unseen.
Neutrinos are as near to nothing as anything we know, and so elusive that they are almost invisible. When Davis began looking for solar neutrinos in 1960, many thought that he was attempting the impossible. It nearly turned out to be: 40 years would pass before he was proved right, leading to his Nobel Prize for physics in 2002, aged 87.
In June 2006, I was invited by The Guardian newspaper to write his obituary. An obituary necessarily focuses on the one person, but the saga of the solar neutrinos touched the lives of several others, scientists who devoted their entire careers chasing the elusive quarry, only to miss out on the Nobel Prize by virtue of irony, chance, or, tragically, by having already died.
Of them all, the most tragic perhaps is the genius Bruno Pontecorvo.
Pontecorvo was a remarkable scientist and a communist, working at Harwell after the war. When his Harwell colleague Klaus Fuchs was exposed as an atom spy in 1950, Pontecorvo immediately fled to the USSR. This single act probably killed his chances of a Nobel Prize.
In the following years, Pontecorvo developed a number of ideas that could have won him one or more Nobels. But his papers were published in Russian, and were unknown in the West until their English translations appeared up to two years later. By this time others in the USA had come up with the same ideas, later winning the Nobel Prize themselves.
Amongst his ideas, one involved an experiment which Soviet facilities could not perform. But most ironic were Pontecorvo’s insights about neutrinos.
Ray Davis had detected solar neutrinos – but not enough of them. For years, many of us involved in this area of research thought Davis’ experiment must have been at fault. But Pontecorvo had another theory, which indicated that, like chameleons, neutrinos change their form en route across space from the Sun to Earth. And he was right. It took many years to prove it, but by 2000 the whole saga was completed. Davis duly won his Nobel Prize, but so many years had elapsed that Pontecorvo by then was dead.
So although my piece for The Guardian began as the life story of Ray Davis, Pontecorvo was there behind the scenes to such an extent that it became his story also. It is also the story of John Bahcall, Davis’ lifelong collaborator, who, to the surprise of many, was not included in the Nobel award.
The lives of these three great scientists were testimony to what science is all about: as Edison put it, genius is 1% inspiration and 99% perspiration.
A final sobering thought to put our human endeavors in context: those neutrinos that passed through you when you started reading this article are by now well on their way to Mars.