Viewing: Blog Posts Tagged with: Physics, Most Recent at Top
Results 1 - 25 of 51
1. Patterns in physics

The aim of physics is to understand the world we live in. Given its myriad objects and phenomena, understanding means seeing connections and relations between things that may seem unrelated and very different: a falling apple, say, and the Moon in its orbit around the Earth, both governed by the same law of gravitation. In this way, many things “fall into place” in terms of a few basic ideas, principles (laws of physics), and patterns.
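The apple-and-Moon connection can be checked with a few lines of arithmetic, a sketch of Newton's famous comparison using rounded textbook values (all numbers below are assumed approximations):

```python
import math

# If gravity weakens as 1/r^2, surface gravity scaled out to the Moon's
# distance should match the Moon's actual centripetal acceleration.
g_surface = 9.81            # m/s^2, gravitational acceleration at Earth's surface
r_earth   = 6.371e6         # Earth's radius, m
r_moon    = 3.844e8         # mean Earth-Moon distance, m
T_moon    = 27.32 * 86400   # sidereal month, s

predicted = g_surface * (r_earth / r_moon) ** 2      # 1/r^2 scaling of g
observed  = 4 * math.pi ** 2 * r_moon / T_moon ** 2  # centripetal acceleration
print(predicted, observed)  # both come out near 2.7e-3 m/s^2
```

The two numbers agree to about one percent, which is the sense in which the apple and the Moon "fall into place" under one law.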

As with many an intellectual activity, recognizing patterns and analogies, and thinking metaphorically, are essential in physics too. James Clerk Maxwell, one of the greatest physicists, put it thus: “In a pun, two truths lie hid under one expression. In an analogy, one truth is discovered under two expressions.”

Indeed, physics employs many metaphors, from a pendulum’s swing and a coin’s two-sidedness, examples already familiar from everyday language, to some of its own making. Even the familiar ones acquire additional richness through the many physical systems to which they are applied. In this, physics uses the language of mathematics, itself a study of patterns, but with a rigor and logic not present in everyday languages and a universality that stretches across lands and peoples.

Rigor is essential because analogies can also mislead, be false or fruitless. In physics, there is an essential tension between the analogies and patterns we draw, which we must, and subjecting them to rigorous tests. The rigor of mathematics is invaluable but, more importantly, we must look to Nature as the final arbiter of truth. Our conclusions need to fit observation and experiment. Physics is ultimately an experimental subject.

Physics is not just mathematics, let alone, as some would have it, the natural world itself being nothing but mathematics. Indeed, five centuries of physics are replete with instances of the same mathematics describing a variety of different physical phenomena. Electromagnetic and sound waves share much in common but are not the same thing; indeed, they are fundamentally different in many respects. Nor are quantum wave solutions of the Schrödinger equation the same as either, even though all involve the same Laplacian operator.
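The point can be made concrete. The same d'Alembert wave equation, with the same Laplacian, governs both sound and light; only the meaning of the field and the speed change:

```latex
\frac{\partial^2 p}{\partial t^2} = v_s^2\,\nabla^2 p
\qquad \text{(sound: $p$ a pressure deviation, } v_s = \sqrt{\gamma P/\rho}\,\text{)}
\]
\[
\frac{\partial^2 E}{\partial t^2} = c^2\,\nabla^2 E
\qquad \text{(light: $E$ an electric-field component, } c = 1/\sqrt{\mu_0 \varepsilon_0}\,\text{)}
```

Identical form, term by term; yet one describes a mechanical disturbance needing a medium, the other a field propagating in vacuum.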

Advanced Theoretical Physics by Marvin (PA). CC-BY-NC-2.0 via mscolly Flickr.

Along with seeing connections between seemingly different phenomena, physics sees the same thing from different points of view. This was already true in classical physics, and quantum physics made it even more so. For Newton, as in the later Lagrangian and Hamiltonian formulations that physicists use, the positions and velocities (or momenta) of the particles involved are given at some initial instant, and the aim of physics is to describe the state at a later instant. But, with quantum physics (the uncertainty principle) forbidding simultaneous specification of position and momentum, the very meaning of the state of a physical system had to change. A choice has to be made to describe the state either in terms of positions or of momenta.

Physicists use the word “representation” to describe these alternatives, which are like languages in everyday parlance. Just as one needs some language (all of them equivalent) not only to communicate with others but even in one’s own thinking, so also in physics. One can use the “position representation” or the “momentum representation” (or even some other), each capable of giving a complete description of the physical system. The underlying reality itself, and most physicists believe that there is one, lies in none of these representations, residing instead in a complex space in the mathematical sense of complex versus real numbers. The state of a system in quantum physics is such a complex “wave function”, which can be thought of in either position or momentum space.
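A numerical sketch of the two representations, under assumed illustrative parameters (a Gaussian wave packet, units with ħ = 1): the same state written in position space and, via a Fourier transform, in momentum space, with the two spreads multiplying to ħ/2.

```python
import math, cmath

sigma = 1.0
dx = 0.05
xs = [i * dx for i in range(-200, 201)]               # x grid on [-10, 10]
psi = [math.exp(-x * x / (4 * sigma ** 2)) for x in xs]  # position-space amplitude

dp = 0.05
ps = [i * dp for i in range(-80, 81)]                 # p grid on [-4, 4]
# momentum representation: phi(p) = integral of psi(x) exp(-i p x) dx,
# evaluated here as a plain Riemann sum
phi = [sum(f * cmath.exp(-1j * p * x) for f, x in zip(psi, xs)) * dx
       for p in ps]

def spread(grid, amp2, step):
    """Standard deviation of a zero-mean probability density on a grid."""
    norm = sum(amp2) * step
    return math.sqrt(sum(g * g * a for g, a in zip(grid, amp2)) * step / norm)

delta_x = spread(xs, [f * f for f in psi], dx)
delta_p = spread(ps, [abs(c) ** 2 for c in phi], dp)
print(delta_x, delta_p, delta_x * delta_p)  # product ~ 0.5, i.e. hbar/2
```

Either list of numbers is a complete description of the same state; neither is more "real" than the other, which is exactly the language analogy of the text.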

Either way, the wave function is not directly accessible to us. We have no wave-function meters. Since, by definition, anything observed by our experimental apparatus, as readings on real dials, is real, these outcomes access the underlying reality in what we call the “classical limit”. In particular, the step into real quantities involves the squared modulus of the complex wave function, with many of the phases of these complex functions getting averaged (blurred) out. Many so-called mysteries of quantum physics can be laid at this door. It is as if a literary text in its ur-language is inaccessible, available to us only in one or another translation.
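The phase-blurring step can be illustrated in a few lines: multiplying a complex amplitude by any pure phase leaves its squared modulus, the only thing our real dials record, unchanged. The amplitude below is an arbitrary illustrative value.

```python
import cmath

psi = 0.6 + 0.8j                       # some complex amplitude, |psi|^2 = 1.0
for theta in (0.0, 1.0, 2.5):
    rotated = psi * cmath.exp(1j * theta)   # rotate the phase
    print(theta, abs(rotated) ** 2)         # squared modulus is unchanged
```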

In Orbit by Dave Campbell. CC-BY-NC-ND-2.0 via limowreck666 Flickr.

What we understand by a particle such as an electron, defined as a certain lump of mass, charge, and spin angular momentum and recognized as such by our electron detectors is not how it is for the underlying reality. Our best current understanding in terms of quantum field theory is that there is a complex electron field (as there is for a proton or any other entity), a unit of its excitation realized as an electron in the detector. The field itself exists over all space and time, these being “mere” markers or parameters for describing the field function and not locations where the electron is at an instant as had been understood ever since Newton.

Along with the electron, nearly all the elementary particles that make up our Universe manifest as particles in the classical limit. Only two electrically neutral, zero-mass bosons (a term used for particles with integer values of spin angular momentum in units of the fundamental quantum called Planck’s constant), the ones that describe electromagnetism and gravitation, are realized as classical electric and magnetic or gravitational fields. The very words particle and wave, as with position and momentum, are meaningful only in the classical limit. The underlying reality itself is indifferent to them even though, as with languages, we have to grasp it in terms of one or the other representation and in this classical limit.

The history of physics may be seen as progressively separating what are incidental markers or parameters used for keeping track through various representations from what is essential to the physics itself. Some of this is immediate; others require more sophisticated understanding that may seem at odds with (classical) common sense and experience. As long as that is kept clearly in mind, many mysteries and paradoxes are dispelled, seen as artifacts of our pushing our models and language too far and “identifying” them with the underlying reality, one in principle out of reach. We hope our models and pictures get progressively better, approaching that underlying reality as an asymptote, but they will never become one with it.

Headline Image credit: Milky Way Rising over Hilo by Bill Shupp. CC-BY-2.0 via shupp Flickr

The post Patterns in physics appeared first on OUPblog.

0 Comments on Patterns in physics as of 11/13/2014 5:25:00 AM
2. Blue LED lighting and the Nobel Prize for Physics

When I wrote Materials: A Very Short Introduction (published later this month) I made a list of all the Nobel Prizes that had been awarded for work on materials. There are lots. The first was the 1905 Chemistry prize to Adolf von Baeyer for dyestuffs (think indigo and denim). Now we can add another, as the 2014 Physics prize has been awarded to the three Japanese scientists who discovered how to make blue light-emitting diodes. Blue LEDs are important because they make possible white LEDs. This is the big winner. White LED lighting is sweeping the world, and that’s something whose value we can all easily understand. (Well done to the Nobel Foundation, by the way: this year the Physics and Medicine prizes are both about things we can all get the hang of.)

Red and green LEDs have been around for a long time, but making a blue one was a nightmare, or at least a very long journey. It was the sustained target of industrial and academic research for more than twenty years. (Baeyer’s indigo by the way was a similar case. In the late nineteenth century, making an industrial indigo dye was everyone’s top priority, but the synthesis proved elusive.) What Akasaki, Amano, and Nakamura did was to work with a new semiconductor material, gallium nitride GaN, and find ways to build it into a tiny club sandwich. Layered heterostructures like this are at the heart of many semiconductor devices — there was a Nobel Prize for them in 2000. So it is not so much the concept of the blue LED that the new Nobel Prize recognizes as inventing methods to make efficient, reliable devices from GaN materials. In this Akasaki, Amano, and Nakamura succeeded where many others had failed.

The commercial blue LED is formed by two crystalline layers of GaN between which is sandwiched a layer of GaN mixed with closely related semiconductor indium nitride InN. The InGaN layer is only a few atoms thick: in the business it is called a quantum well. Finding how to grow these exquisitely precise layers (generally depositing atoms from a vapor on a smooth sapphire surface) took many years.

The quantum well is where the action occurs. When a current flows through the device, negative electrons and positive holes are briefly trapped in the quantum well. When they combine, there is a little pop of energy, which appears as a photon of blue light. The efficiency of the device depends on getting as many of the electron-hole pairs as possible to produce photons, and on preventing the electrical energy from leaking off into other processes and ending up as heat. The blue LED achieves conversion efficiencies of more than 50%, an extraordinary improvement on traditional lighting technology.
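As a back-of-envelope check (the 450 nm wavelength is an assumed typical value for a blue LED, not a figure from the text), the energy of each photon "pop" follows from E = hc/λ:

```python
h = 6.626e-34          # Planck's constant, J*s
c = 2.998e8            # speed of light, m/s
wavelength = 450e-9    # assumed typical blue-LED wavelength, m

E_joule = h * c / wavelength
E_eV = E_joule / 1.602e-19   # convert to electron-volts
print(round(E_eV, 2))        # roughly 2.76 eV per blue photon
```

That energy scale is what the InGaN quantum-well transition has to supply, which is why the well's composition and thickness must be controlled so precisely.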

An LED Solar Lamp, Rizal Park, Philippines “Solar Lamp Luneta” by SeamanWell. CC-BY-SA-3.0 via Wikimedia Commons.

How does this help us to get white light? Well, one route is to combine the light from blue, red, and green LEDs, and with a nod to Isaac Newton the result is white light. But most commercial white LEDs don’t work that way. They contain only a blue LED, and are constructed so that the blue light shines through a thin coating of a material called a phosphor. The phosphor (commonly a yttrium garnet doped with cerium) converts some of the blue light to longer wavelength yellow light. The combination of yellow and blue light appears white.

Perhaps we should pay more attention to how amazing little devices such as these are made. And how they are packaged, and sold for next to nothing as components for everyday consumer products. Low cost and availability are important. It is easy to see that making a white-light LED which can produce, say, 200 lumens of light for every watt of electrical energy it uses is a big step in reducing energy consumption in lighting homes, offices, industries, in street lighting, in vehicles, and so on. LEDs replace the old incandescent lamp, which produced perhaps 15 lumens per watt. Since 20% of our electricity is used for lighting, a practical white LED lamp is transformative.
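The arithmetic behind "transformative", using only the figures quoted above:

```python
# 200 lm/W LEDs replacing 15 lm/W incandescents, lighting = 20% of electricity
led_lm_per_w, old_lm_per_w = 200.0, 15.0
power_ratio = old_lm_per_w / led_lm_per_w        # power needed for the same light
lighting_share = 0.20
total_saving = lighting_share * (1 - power_ratio)
print(power_ratio, total_saving)   # same light for 7.5% of the power;
                                   # ~18.5% of all electricity saved
```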

But the white LED has another benefit, in bringing useful light to communities all over the world that do not have a public electricity supply. One day, I took to pieces a little solar lamp, which sells for a few dollars. I wanted to see exactly what was in it, and in particular how many chemical elements I could find. When I totted them up I had found more than twenty, about a quarter of all the elements in the Periodic Table. This little lamp has a small solar panel, a lithium battery and at its heart a white LED. It brings white light to people who previously had only dangerous kerosene lamps, or perhaps nothing at all. And it provides a solar-powered charger for a phone too. Four of the more exotic elements in this lamp are in the LED light, indium and gallium in the LED heterostructure, and yttrium and cerium in the phosphor. Is this solar lamp really the simple product that it seems? Or is it, like thousands of other everyday articles, a miracle of material ingenuity?

Featured image: Blue light emitting diodes over a proto-board by Gussisaurio. CC-BY-SA-3.0 via Wikimedia Commons.

The post Blue LED lighting and the Nobel Prize for Physics appeared first on OUPblog.

0 Comments on Blue LED lighting and the Nobel Prize for Physics as of 10/9/2014 7:37:00 PM
3. Are we alone in the Universe?

World Space Week has prompted my colleagues at the Open University and me to discuss the question: ‘Is there life beyond Earth?’

The bottom line is that we are now certain that there are many places in our Solar System and around other stars where simple microbial life could exist, of kinds that we know from various settings, both mundane and exotic, on Earth. What we don’t know is whether any life does exist in any of those places. Until we find another example, life on Earth could be just an extremely rare fluke. It could be the only life in the whole Universe. That would be a very sobering thought.

At the other extreme, it could be that life pops up pretty much everywhere that it can, so there should be microbes everywhere. If that is the case, then surely evolutionary pressures would often lead towards multicellular life and then to intelligent life. But if that is correct – then where is everybody? Why can’t we recognise the signs of great works of astroengineering by more ancient and advanced aliens? Why can’t we pick up their signals?

The chemicals from which life can be made are available all over the place. Comets, for example, contain a wide variety of organic molecules. They aren’t likely places to find life, but collisions of comets onto planets and their moons should certainly have seeded all the habitable places with the materials from which life could start.

So where might we find life in our Solar System? Most people think of Mars, and it is certainly well worth looking there. The trouble is that lumps of rock knocked off Mars by asteroid impacts have been found on Earth. It won’t have been one-way traffic. Asteroid impacts on Earth must have showered some bits of Earth-rock onto Mars. Microbes inside a rock could survive a journey in space, and so if we do find life on Mars it will be important to establish whether or not it is related to Earth-life. Only if we find evidence of an independent genesis of life on another body in our Solar System will we be able to conclude that the probability of life starting, given the right conditions, is high.

A colour image of comet 67/P from Rosetta’s OSIRIS camera. Part of the ‘body’ of the comet is in the foreground. The ‘head’ is in the background, and the landing site where the Philae lander will arrive on 12 November 2014 is out of view on the far side of the ‘head’. (Patrik Tschudin, CC-BY-2.0 via Flickr)

For my money, Mars is not the most likely place to find life anyway. The surface environment is very harsh. The best we might hope for is some slowly-metabolising rock-eating microbes inside the rock. For a more complex ecosystem, we need to look inside oceans. There is almost certainly liquid water below the icy crust of several of the moons of the giant planets – especially Europa (a moon of Jupiter) and Enceladus (a moon of Saturn). These are warm inside because of tidal heating, and the way-sub-zero surface and lack of any atmosphere are irrelevant. Moreover, there is evidence that life on Earth began at ‘hydrothermal vents’ on the ocean floor, where hot, chemically-rich water seeps or gushes out. Microbes feed on that chemical energy, and more complex organisms graze on the microbes. No sunlight, and no plants, are involved. Similar vents seem pretty likely inside these moons – so we have the right chemicals and the right conditions to start life, and to support a complex ecosystem. If there turns out to be no life under Europa’s ice then I think the odds of life being abundant around other stars will lengthen considerably.

We think that Europa’s ice is mostly more than 10 km thick, so establishing whether or not there is life down there won’t be easy. Sometimes the surface cracks apart and slush is squeezed out to form ridges, and these may be the best target for a lander, which might find fossils entombed in the slush.

Enceladus is smaller and may not have such a rich ocean, but comes with the big advantage of spraying samples of its ocean into space through cracks near its south pole (similar plumes have been suspected at Europa, but not proven). A properly equipped space probe could fly through Enceladus’s eruption plumes and look for chemical or isotopic traces of life without needing to land.

I’m sure you’ll agree, moons are fascinating!

Headline image credit: Center of the Milky Way Galaxy, from NASA’S Marshall Space Flight Center. CC-BY-ND-2.0 via Flickr.

The post Are we alone in the Universe? appeared first on OUPblog.

0 Comments on Are we alone in the Universe?
4. Nicholson’s wrong theories and the advancement of chemistry

By Eric Scerri


The past couple of years have seen the celebration of a number of key developments in the history of physics. In 1913 Niels Bohr, perhaps the second most famous physicist of the 20th century after Einstein, published his iconic theory of the atom. Its main ingredient, which has propelled it into the scientific hall of fame, was its incorporation of the notion of the quantum of energy. The now commonplace view that electrons are in shells around the nucleus is a direct outcome of the quantization of their energy.
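Bohr's quantization makes concrete numerical predictions; a minimal sketch from rounded textbook constants (the values below are standard approximations, not figures from the text):

```python
# Bohr energy levels of hydrogen: E_n = -(m e^4 / 8 eps0^2 h^2) / n^2
m_e  = 9.109e-31    # electron mass, kg
q    = 1.602e-19    # elementary charge, C
eps0 = 8.854e-12    # vacuum permittivity, F/m
h    = 6.626e-34    # Planck's constant, J*s

rydberg_J = m_e * q**4 / (8 * eps0**2 * h**2)   # ground-state binding energy, J
E_eV = lambda n: -(rydberg_J / q) / n ** 2      # level energy in eV
print(round(-E_eV(1), 2))        # ~13.6 eV, the hydrogen ionization energy

# A quantum jump between shells n = 3 and n = 2 (the red Balmer line):
delta = E_eV(3) - E_eV(2)        # photon energy, eV
print(round(1239.8 / delta))     # wavelength in nm, ~656
```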

Between 1913 and 1914 the little known English physicist, Henry Moseley, discovered that the use of increasing atomic weights was not the best way to order the elements in the chemist’s periodic table. Instead, Moseley proposed using a whole number sequence to denote a property that he called the atomic number of an element. This change had the effect of removing the few remaining anomalies in the way that the elements are arranged in this icon of science that is found on the walls of lecture halls and laboratories all over the world. In recent years the periodic table has even become a cultural icon to be appropriated by artists, designers and advertisers of every persuasion.
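Moseley's ordering rests on a simple empirical law: the square root of the frequency of an element's Kα X-ray line rises linearly with the atomic number Z. A sketch with rounded constants (the screening constant of 1 is the usual textbook assumption):

```python
import math

# Moseley's law for the K-alpha line: nu = (3/4) * c * R_inf * (Z - 1)^2
c, R = 2.998e8, 1.097e7              # speed of light (m/s), Rydberg constant (1/m)

def k_alpha_freq(Z):
    return 0.75 * c * R * (Z - 1) ** 2

for name, Z in [("Ca", 20), ("Fe", 26), ("Cu", 29), ("Zn", 30)]:
    E_keV = 6.626e-34 * k_alpha_freq(Z) / 1.602e-19 / 1000
    print(name, Z, round(E_keV, 2))  # Cu comes out near 8 keV, as measured

# sqrt(nu) is a straight line in Z: equal steps in Z give equal steps here
steps = [math.sqrt(k_alpha_freq(Z)) for Z in (20, 21, 22)]
print(round(steps[1] - steps[0], -6) == round(steps[2] - steps[1], -6))  # True
```

That straight line is what let Moseley assign a whole number to each element, and so spot the gaps and inversions in the weight-ordered table.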

But another scientist who was publishing articles at about the same time as Bohr and Moseley has been almost completely forgotten by all but a few historians of physics. He is the English mathematical physicist John Nicholson, who was in fact the first to suggest that the angular momentum of electrons in an atom is quantized. Bohr openly acknowledged this point in all his early papers.

Nicholson hypothesized the existence of what he called proto-elements, which he believed existed in interstellar space and gave rise to our familiar terrestrial chemical elements. He gave them exotic names like nebulium and coronium, and using this idea he was able to explain many unassigned lines in the spectra of the solar corona and of major nebulae such as the famous Crab nebula in the constellation of Taurus. He also succeeded in predicting some hitherto unknown lines in each of these astronomical bodies.

The really odd thing is that Nicholson was completely wrong, or at least that’s how his ideas are usually regarded. How is it that supposedly ‘wrong’ theories can produce such advances in science, even if only temporarily?

Image Credit: Bio Lab. Photo by Amy. CC BY 2.0 via Amy Loves Yah Flickr.


Science progresses as a unified whole, not stopping to care about which scientist is successful or not, while being only concerned with overall progress. The attribution of priority and scientific awards, from a global perspective, is a kind of charade which is intended to reward scientists for competing with each other. On this view no scientific development can be regarded as being right or wrong. I like to draw an analogy with the evolution of species or organisms. Developments that occur in living organisms can never be said to be right or wrong. Those that are advantageous to the species are perpetuated while those that are not simply die away. So it is with scientific developments. Nicholson’s belief in proto-elements may not have been productive but his notion of quantization in atoms was tremendously useful and the baton was passed on to Bohr and all the quantum physicists who came later.

Instead of viewing the development of science through the actions of individuals and scientific heroes, a more holistic view better discerns the whole process, including the work of lesser-known intermediate figures such as Nicholson. The Dutch lawyer and amateur physicist Antonius van den Broek first proposed that elements should be characterized by an ordinal number before Moseley had even begun doing physics. This is not a disputed point, since Moseley begins one of his key papers by stating that he began his research in order to verify the van den Broek hypothesis on atomic number.

Another intermediate figure in the history of physics was Edmund Stoner who took a decisive step forward in assigning quantum numbers to each of the electrons in an atom while as a graduate student at Cambridge. In all there are four such quantum numbers which are used to specify precisely how the electrons are arranged first in shells, then sub-shells and finally orbitals in any atom. Stoner was responsible for applying the third quantum number. It was after reading Stoner’s article that the much more famous Wolfgang Pauli was able to suggest a fourth quantum number which later acquired the name of electron spin to describe a further degree of freedom for every electron in an atom.

Eric Scerri is a leading philosopher of science specializing in the history and philosophy of the periodic table. He is the founder and editor-in-chief of the international journal Foundations of Chemistry and has been a full-time chemistry lecturer at UCLA for the past fifteen years, where he regularly teaches classes of 350 chemistry students as well as classes in the history and philosophy of science. He is the author of A Tale of Seven Elements, The Periodic Table: Its Story and Its Significance, and The Periodic Table: A Very Short Introduction.


The post Nicholson’s wrong theories and the advancement of chemistry appeared first on OUPblog.

0 Comments on Nicholson’s wrong theories and the advancement of chemistry as of 8/10/2014 6:26:00 AM
5. The 150th anniversary of Newlands’ discovery of the periodic system

The discovery of the periodic system of the elements and the associated periodic table is generally attributed to the great Russian chemist Dmitri Mendeleev. Many authors have indulged in the game of debating just how much credit should be attributed to Mendeleev and how much to the other discoverers of this unifying theme of modern chemistry.

In fact the discovery of the periodic table represents one of a multitude of multiple discoveries which most accounts of science try to explain away. Multiple discovery is actually the rule rather than the exception and it is one of the many hints that point to the interconnected, almost organic nature of how science really develops. Many, including myself, have explored this theme by considering examples from the history of atomic physics and chemistry.

But today I am writing about a subaltern who discovered the periodic table well before Mendeleev and whose most significant contribution was published on 20 August 1864, or precisely 150 years ago. John Reina Newlands was an English chemist who never held a university position and yet went further than any of his contemporary professional chemists in discovering the all-important repeating pattern among the elements which he described in a number of articles.

John Reina Newlands. Image Credit: Public Domain via Wikimedia Commons.

Newlands came from Southwark, a suburb of London. After studying at the Royal College of Chemistry he became the chief chemist at the Royal Agricultural Society of Great Britain. In 1860, when the leading European chemists were attending the Karlsruhe conference to discuss such concepts as atoms, molecules, and atomic weights, Newlands was busy volunteering to fight in the Italian revolutionary war under Garibaldi. This is explained by the fact that his mother was of Italian descent, which also explains his having the middle name Reina. In any case he survived the fighting and, on his return to London to become a sugar chemist, set about thinking about the elements.

In 1863 Newlands published a list of elements which he arranged into 11 groups. The elements within each of his groups had analogous properties and displayed weights that differed by eight units or some factor of eight. But no table yet!

Nevertheless he even predicted the existence of a new element, which he believed should have an atomic weight of 163 and should fall between iridium and rhodium. Unfortunately for Newlands, neither this element nor a few others he predicted ever materialized, but it does show that the prediction of elements from a system of elements is not something that only Mendeleev invented.

In the first of three articles of 1864 Newlands published his first periodic table, incidentally five years before Mendeleev’s. This arrangement benefited from the revised atomic weights that had been announced at the Karlsruhe conference he had missed, and showed that many elements had weights differing by 16 units. But it only contained 12 elements, ranging from lithium as the lightest to chlorine as the heaviest.
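Newlands' pattern is easy to reproduce. A sketch with rounded nineteenth-century-style atomic weights (illustrative values; the noble gases are absent, as they were still unknown): once the elements are sorted by weight, chemically similar ones recur eight places apart, counting inclusively, like notes an octave apart on a keyboard.

```python
elements = [("H", 1), ("Li", 7), ("Be", 9), ("B", 11), ("C", 12), ("N", 14),
            ("O", 16), ("F", 19), ("Na", 23), ("Mg", 24), ("Al", 27),
            ("Si", 28), ("P", 31), ("S", 32), ("Cl", 35.5), ("K", 39),
            ("Ca", 40)]
order = [name for name, w in sorted(elements, key=lambda e: e[1])]

# chemically similar pairs sit seven positions apart, i.e. each is
# "the eighth element" from the other, counting inclusively
for a, b in [("Li", "Na"), ("Na", "K"), ("Be", "Mg"), ("O", "S"), ("F", "Cl")]:
    print(a, b, order.index(b) - order.index(a))   # 7 in every case
```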

Then came another article, on 20 August 1864, with a slightly expanded range of elements, in which he dropped the use of atomic weights and replaced them with an ordinal number for each element. Historians and philosophers have amused themselves over the years by debating whether this represents an anticipation of the modern concept of atomic number, but that’s another story.

More importantly Newlands now suggested that he had a system, a repeating and periodic pattern of elements, or a periodic law. Another innovation was Newlands’ willingness to reverse pairs of elements if their atomic weights demanded this change as in the case of tellurium and iodine. Even though tellurium has a higher atomic weight than iodine it must be placed before iodine so that each element falls into the appropriate column according to chemical similarities.
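With hindsight, the reversal Newlands allowed is exactly what ordering by atomic number (unknown in his day) requires; a small illustrative sketch:

```python
# Tellurium is heavier than iodine, yet chemistry demands it come first.
te = {"symbol": "Te", "weight": 127.6, "atomic_number": 52}
i_ = {"symbol": "I",  "weight": 126.9, "atomic_number": 53}

by_weight = sorted([te, i_], key=lambda e: e["weight"])
by_number = sorted([te, i_], key=lambda e: e["atomic_number"])
print([e["symbol"] for e in by_weight])   # weight ordering: I before Te
print([e["symbol"] for e in by_number])   # atomic-number ordering: Te before I
```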

The following year, Newlands had the opportunity to present his findings in a lecture to the London Chemical Society but the result was public ridicule. One member of the audience mockingly asked Newlands whether he had considered arranging the elements alphabetically since this might have produced an even better chemical grouping of the elements. The society declined to publish Newlands’ article although he was able to publish it in another journal.

In 1869 and 1870 two more prominent chemists who held university positions published more elaborate periodic systems. They were the German Julius Lothar Meyer and the Russian Dmitri Mendeleev. They essentially rediscovered what Newlands had found and made some improvements. Mendeleev in particular made a point of denying Newlands’ priority, claiming that Newlands had not regarded his discovery as representing a scientific law. These two chemists were awarded the lion’s share of the credit, and Newlands was reduced to arguing for his priority for several years afterwards. In the end he did gain some recognition when the Davy Medal, the closest equivalent of a Nobel Prize for chemistry at the time, which had already been jointly awarded to Lothar Meyer and Mendeleev, was finally accorded to Newlands in 1887, twenty-three years after his article of August 1864.

But there is a final word to be said on this subject. In 1862, two years before Newlands, a French geologist, Émile Béguyer de Chancourtois, had already published a periodic system that he arranged in three-dimensional fashion on the surface of a metal cylinder. He called this the “telluric screw,” from tellus, Latin for the Earth, fittingly since he was a geologist classifying the elements of the earth.

Image: Chemistry by macaroni1945. CC BY 2.0 via Flickr.

The post The 150th anniversary of Newlands’ discovery of the periodic system appeared first on OUPblog.

0 Comments on The 150th anniversary of Newlands’ discovery of the periodic system as of 8/20/2014 7:43:00 AM
6. Dmitri Mendeleev’s lost elements

Dmitri Mendeleev believed he was a great scientist and indeed he was. He was not actually recognized as such until his periodic table achieved worldwide diffusion and began to appear in textbooks of general chemistry and in other major publications. When Mendeleev died in February 1907, the periodic table was established well enough to stand on its own and perpetuate his name for upcoming generations of chemists.

The man died, but the myth was born.

Mendeleev as a legendary figure grew with time, aided by his own well-organized promotion of his discovery. Well-versed in foreign languages and with a sort of overwhelming desire to escape his tsar-dominated homeland, he traveled the length and breadth of Europe, attending many conferences in England, Germany, Italy, and central Europe, his only luggage seemingly his periodic table.

Dmitri Mendeleev, 1897. Public domain via Wikimedia Commons.

Mendeleev had succeeded in creating a new tool that chemists could use as a springboard to new and fascinating discoveries in the fields of theoretical, mineral, and general chemistry. But every coin has two faces, even the periodic table. On the one hand, it lighted the path to the discovery of still missing elements; on the other, it led some unfortunate individuals to fall into the fatal error of announcing the discovery of false or spurious supposed new elements. Even Mendeleev, who considered himself the Newton of the chemical sciences, fell into this trap, announcing the discovery of imaginary elements that presently we know to have been mere self-deception or illusion.

It is probably not well known that Mendeleev predicted the existence of a large number of elements – actually more than ten. Some of these predictions proved to be lucky guesses (the famous cases of gallium, germanium, and scandium); others were erroneous. Historiography has kindly passed over the latter, forgetting the long line of imaginary elements that Mendeleev proposed, among which were two with atomic weights lower than that of hydrogen: newtonium (atomic weight = 0.17) and coronium (atomic weight = 0.4). He also proposed the existence of six new elements between hydrogen and lithium, none of which can exist.

Mendeleev represented a sort of tormented genius who believed in the universality of his creature and dreaded the possibility that it could be eclipsed by other discoveries. He did not live long enough to see the seed that he had planted become a mighty tree. He fought equally, with fierce indignation, the priority claims of others as well as the advent of new discoveries that appeared to menace his discovery.

In the end, his table was enduring enough to accommodate atomic number, isotopes, radioisotopes, the noble gases, the rare earth elements, the actinides, and the quantum mechanics that endowed it with a theoretical framework, allowing it to appear fresh and modern even after a scientific journey of 145 years.

Image: Nursery of new stars by NASA, Hui Yang University of Illinois. Public domain via Wikimedia Commons.

The post Dmitri Mendeleev’s lost elements appeared first on OUPblog.

7. The construction of the Cartesian System as a rival to the Scholastic Summa

René Descartes wrote his third book, Principles of Philosophy, as something of a rival to scholastic textbooks. He prided himself in ‘that those who have not yet learned the philosophy of the schools will learn it more easily from this book than from their teachers, because by the same means they will learn to scorn it, and even the most mediocre teachers will be capable of teaching my philosophy by means of this book alone’ (Descartes to Marin Mersenne, December 1640).

Still, what Descartes produced was inadequate for the task. The topics of scholastic textbooks ranged much more broadly than those of Descartes’ Principles; they usually had four-part arrangements mirroring the structure of the collegiate curriculum, divided as they typically were into logic, ethics, physics, and metaphysics.

But Descartes produced at best only what could be called a general metaphysics and a partial physics.

Knowing what a scholastic course in physics would look like, Descartes understood that he needed to write at least two further parts to his Principles of Philosophy: a fifth part on living things, i.e., animals and plants, and a sixth part on man. And he did not issue what would be called a particular metaphysics.

Portrait of René Descartes by Frans Hals. Public domain via Wikimedia Commons.

Descartes, of course, saw himself as presenting Cartesian metaphysics as well as physics, both the roots and trunk of his tree of philosophy.

But from the point of view of school texts, the metaphysical elements of physics (general metaphysics) that Descartes discussed—such as the principles of bodies: matter, form, and privation; causation; motion: generation and corruption, growth and diminution; place, void, infinity, and time—were usually taught at the beginning of the course on physics.

The scholastic course on metaphysics—particular metaphysics—dealt with other topics, not discussed directly in the Principles, such as: being, existence, and essence; unity, quantity, and individuation; truth and falsity; good and evil.

Such courses usually ended up with questions about knowledge of God, names or attributes of God, God’s will and power, and God’s goodness.

Thus the Principles of Philosophy by itself was not sufficient as a text for the standard course in metaphysics. And Descartes also did not produce texts in ethics or logic for his followers to use or to teach from.

These must have been perceived as glaring deficiencies in the Cartesian program and in the aspiration to replace Aristotelian philosophy in the schools.

So the Cartesians rushed in to fill the voids. One could mention their attempts to complete the physics—Louis de la Forge’s additions to the Treatise on Man, for example—or to produce more conventional-looking metaphysics—such as Johann Clauberg’s later editions of his Ontosophia or Baruch Spinoza’s Metaphysical Thoughts.

Cartesians in the 17th century began to supplement the Principles and to produce the kinds of texts not normally associated with their intellectual movement, that is, treatises on ethics and logic, the most prominent of the latter being the Port-Royal Logic (Paris, 1662).


The attempt to publish a Cartesian textbook that would mirror what was taught in the schools culminated in the famous multi-volume works of Pierre-Sylvain Régis and of Antoine Le Grand.

The Franciscan friar Le Grand initially published a popular version of Descartes’ philosophy in the form of a scholastic textbook, expanding it in the 1670s and 1680s; the work, Institution of Philosophy, was then translated into English together with other texts of Le Grand and published as An Entire Body of Philosophy according to the Principles of the famous Renate Descartes (London, 1694).

On the Continent, Régis issued his General System According to the Principles of Descartes at about the same time (Amsterdam, 1691), having had difficulties receiving permission to publish. Ultimately, Régis’ oddly unsystematic (and very often un-Cartesian) System set the standard for Cartesian textbooks.

By the end of the 17th century, the Cartesians, having lost many battles, ultimately won the war against the Scholastics. The changes in the contents of textbooks, from the scholastic Summa at the beginning of the 17th century to the Cartesian System at its end, demonstrate the full range of the attempted Cartesian revolution, whose scope was not limited to physics (narrowly conceived) and its epistemology, but included logic, ethics, physics (more broadly conceived), and metaphysics.

Headline image credit: Dispute of Queen Cristina Vasa and René Descartes, by Nils Forsberg (1842-1934) after Pierre-Louis Dumesnil the Younger (1698-1781). Public domain via Wikimedia Commons.

The post The construction of the Cartesian System as a rival to the Scholastic Summa appeared first on OUPblog.

8. CERN: glorious past, exciting future

Today, 60 years ago, the visionary convention establishing the European Organization for Nuclear Research – better known with its French acronym, CERN – entered into force, marking the beginning of an extraordinary scientific adventure that has profoundly changed science, technology, and society, and that is still far from over.

With other pan-European institutions established in the late 1940s and early 1950s — like the Council of Europe and the European Coal and Steel Community — CERN shared the same founding goal: to coordinate the efforts of European countries after the devastating losses and large-scale destruction of World War II. Europe had in particular lost its scientific and intellectual leadership, and many scientists had fled to other countries. The time had come for European researchers to join forces toward the creation of a world-leading laboratory for fundamental science.

Sixty years after its foundation, CERN is today the largest scientific laboratory in the world, with more than 2000 staff members and many more temporary visitors and fellows. It hosts the most powerful particle accelerator ever built. It also hosts exhibitions, lectures, shows, meetings, and debates, providing a forum of discussion where science meets industry and society.

What has happened in these six decades of scientific research? As a physicist, I should probably first mention the many ground-breaking discoveries in Particle Physics, such as the discovery of some of the most fundamental building blocks of matter, like the W and Z bosons in 1983; the measurement of the number of neutrino families at LEP in 1989; and of course the recent discovery of the Higgs boson in 2012, which led to the 2013 Nobel Prize in Physics for Peter Higgs and François Englert.

But looking back at the glorious history of this laboratory, much more comes to mind: the development of technologies that found medical applications, such as PET scans; computer science applications such as globally distributed computing, which finds use in fields ranging from genetic mapping to economic modelling; and the World Wide Web, which was developed at CERN as a network to connect universities and research laboratories.

“CERN Control Center (2)” by Martin Dougiamas – Flickr: CERN control center. Licensed under CC BY 2.0 via Wikimedia Commons.

If you’ve ever asked yourself what such a laboratory may look like, especially if you plan to visit it and expect buildings with a distinctive sleek, high-tech look, let me warn you that the first impression may be slightly disappointing. When I first visited CERN, I couldn’t help noticing the old buildings, dusty corridors, and the overall rather grimy look of the section hosting the theory institute. But when an elevator brought me down to visit the accelerator, I realized what was actually happening there, as I witnessed the colossal size of the detectors and the incredible sophistication of the technology used. ATLAS, for instance, is 25 metres high, 25 metres wide, and 45 metres long, and it weighs about 7,000 tons!

The 27-km long Large Hadron Collider is currently shut down for planned upgrades. When new beams of protons circulate in it at the end of 2014, they will carry almost twice the energy reached in the previous run. About 2,800 bunches of protons will orbit the ring, each containing several hundred billion protons, separated by 250 billionths of a second (as in a car race, the distance between bunches can be expressed in units of time). The energy of each proton will be comparable to that of a flying mosquito, but concentrated in a single elementary particle, and the energy of an entire bunch will be comparable to that of a medium-sized car launched at highway speed.
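These comparisons are easy to sanity-check. In the sketch below all the numbers are assumed round values, not figures from the article: 6.5 TeV per proton for the upgraded run, 1.15e11 protons per bunch, a 2.5 mg mosquito flying at 1 m/s, and a 1000 kg car at 120 km/h.

```python
# Back-of-the-envelope check of the energy comparisons above.
# All parameter values are assumed round numbers, not from the article.
EV = 1.602e-19  # joules per electronvolt

proton_energy_J = 6.5e12 * EV               # one proton at 6.5 TeV
bunch_energy_J = proton_energy_J * 1.15e11  # one bunch of protons

mosquito_J = 0.5 * 2.5e-6 * 1.0 ** 2        # kinetic energy, (1/2) m v^2
car_J = 0.5 * 1000 * (120 / 3.6) ** 2       # 120 km/h converted to m/s

print(f"proton: {proton_energy_J:.1e} J  vs  mosquito: {mosquito_J:.1e} J")
print(f"bunch:  {bunch_energy_J:.1e} J  vs  car:      {car_J:.1e} J")
```

With these inputs both pairs agree to within an order of magnitude, which is all such popular comparisons claim.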

Why these high energies? Einstein’s E=mc2 tells us that energy can be converted into mass, so by colliding two protons at very high energy we can in principle produce very heavy particles, possibly new particles never observed before. You may wonder why we would expect such new particles to exist. After all, we have already successfully created Higgs bosons through very high-energy collisions; what can we expect to find beyond that? Well, that’s where the story becomes exciting.
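As a quick illustration of E=mc2 at work, a minimal sketch using the measured Higgs mass of about 125 GeV and standard physical constants (the constants and the conversion are mine, not from the article):

```python
# How heavy a particle can 125 GeV of collision energy create, via m = E / c^2?
EV = 1.602e-19         # joules per electronvolt
C = 2.998e8            # speed of light, m/s
PROTON_KG = 1.673e-27  # proton mass, kg

higgs_kg = 125e9 * EV / C ** 2   # mass equivalent of 125 GeV
n_protons = higgs_kg / PROTON_KG

print(f"{higgs_kg:.2e} kg, about {n_protons:.0f} proton masses")
```

So a Higgs boson weighs roughly 130 times as much as the protons that collide to make it, which is why such high beam energies are needed.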

Some of the best motivated theories currently under scrutiny in the scientific community – such as Supersymmetry – predict that not only should new particles exist, but they could explain one of the greatest mysteries in Cosmology: the presence of large amounts of unseen matter in the Universe, which seem to dominate the dynamics of all structures in the Universe, including our own Milky Way galaxy — Dark Matter.

Identifying in our accelerators the substance that permeates the Universe and shapes its structure would represent an important step forward in our quest to understand the Cosmos, and our place in it. CERN, 60 years old and still going strong, is rising to the challenge.

Headline image credit: An example of simulated data modeled for the CMS particle detector on the Large Hadron Collider (LHC) at CERN. Image by Lucas Taylor, CERN. CC BY-SA 3.0 via Wikimedia Commons.

The post CERN: glorious past, exciting future appeared first on OUPblog.

9. Celebrating 60 years of CERN

2014 marks not just the centenary of the start of World War I, and the 75th anniversary of the start of World War II, but on 29 September it is 60 years since the establishment of CERN, the European Organization for Nuclear Research – its remit now, in modern terms, particle physics. Less than a decade after European nations had been fighting one another in a terrible war, 12 of those nations had united in science. Today, CERN is a world laboratory, famed for having been the home of the world wide web, brainchild of then CERN scientist Tim Berners-Lee; of several Nobel Prizes for physics, although not (yet) for Peace; and most recently, for the discovery of the Higgs Boson. The origin of CERN, and its political significance, are perhaps no less remarkable than its justly celebrated status as the greatest laboratory of scientific endeavour in history.

Its life has spanned a remarkable period in scientific culture. The paradigm shifts in our understanding of the fundamental particles and the forces that control the cosmos, which have occurred since 1950, are in no small measure thanks to CERN.

In 1954, the hoped-for simplicity in matter, where the electron and neutrino partner a neutron and proton, had been lost. Novel relatives of the proton were proliferating. Then, exactly 50 years ago, the theoretical concept of the quark was born, explaining the multitude as bound states of groups of quarks. By 1970 the existence of this new layer of reality had been confirmed by experiments at Stanford, California, and at CERN.

During the 1970s our understanding of quarks and the strong force developed. On the one hand this was thanks to theory, but also due to experiments at CERN’s Intersecting Storage Rings: the ISR. Head on collisions between counter-rotating beams of protons produced sprays of particles, which instead of flying in all directions, tended to emerge in sharp jets. The properties of these jets confirmed the predictions of quantum chromodynamics – QCD – the theory that the strong force arises from the interactions among the fundamental quarks and gluons.

CERN had begun in 1954 with a proton synchrotron, a circular accelerator with a circumference of about 600 metres, which was vast at the time, although trifling by modern standards. This was superseded by a super-proton synchrotron, or SPS, some 7 kilometres in circumference. This fired beams of protons and other particles at static targets, its precision measurements building confidence in the QCD theory and also in the theory of the weak force – QFD, quantum flavourdynamics.

The Globe of Science and Innovation. CC0 via Pixabay

QFD brought the electromagnetic and weak forces into a single framework. This first step towards a possible unification of all forces implied the existence of W and Z bosons, analogues of the photon. Unlike the massless photon, however, the W and Z were predicted to be very massive, some 80 to 90 times more than a proton or neutron, and hence beyond the reach of experiments at that time. This changed when the SPS was converted into a collider of protons and anti-protons. By 1983 experiments at the novel accelerator had discovered the W and Z bosons, in line with what QFD predicted. This led to Nobel Prizes for Carlo Rubbia and Simon van der Meer in 1984.

The confirmation of QCD and QFD led to a marked change in particle physics. Where hitherto it had sought the basic templates of matter, from the 1980s it turned increasingly to understanding how matter emerged from the Big Bang. For CERN’s very high-energy experiments replicate conditions that were prevalent in the hot early universe, and theory implies that the behaviour of the forces and particles in such circumstances is less complex than at the relatively cool conditions of daily experience. Thus began a period of high-energy particle physics as experimental cosmology.

This raced ahead during the 1990s with LEP – the Large Electron Positron collider, a 27-kilometre ring of magnets underground, which looped from CERN towards Lake Geneva, beneath the airport and back to CERN via the foothills of the Jura Mountains. Initially designed to produce tens of millions of Z bosons in order to test QFD and QCD to high precision, by 2000 it was able to produce pairs of W bosons. The precision was such that small deviations were found between these measurements and what theory implied for the properties of these particles.

The explanation involved two particles, whose subsequent discoveries have closed a chapter in physics. These are the top quark, and the Higgs Boson.

As gaps in Mendeleev’s periodic table of the elements in the 19th century had identified new elements, so at the end of the 20th century a gap in the emerging pattern of particles was discerned. To complete the menu required a top quark.

The precision measurements at LEP could be explained if the top quark exists, too massive for LEP to produce directly, but nonetheless able to disturb the measurements of other quantities at LEP courtesy of quantum theory. Theory and data would agree if the top quark mass were nearly two hundred times that of a proton. The top quark was discovered at Fermilab in the USA in 1995, its mass as required by the LEP data from CERN.

As the 21st century dawned, all the pieces of the “Standard Model” of particles and forces were in place, but one. The theories worked well, but we had no explanation of why the various particles have their menu of masses, or even why they have mass at all. Adding mass into the equations by hand is like a band-aid, capable of allowing computations that agree with data to remarkable precision. However, we can imagine circumstances, where particles collide at energies far beyond those accessible today, in which the theories would predict nonsense — infinity as the answer for quantities that are finite, for example. A mathematical solution to this impasse had been discovered fifty years ago, and implied that there is a further massive particle, known as the Higgs Boson, after Peter Higgs who, alone of the independent discoverers of the concept, drew attention to some crucial experimental implications of the boson.

Discovery of the Higgs Boson at CERN in 2012 following the conversion of LEP into the LHC – Large Hadron Collider – is the climax of CERN’s first 60 years. It led to the Nobel Prize for Higgs and Francois Englert, theorists whose ideas initiated the quest. Many wondered whether the Nobel Foundation would break new ground and award the physics prize to a laboratory, CERN, for enabling the experimental discovery, but this did not happen.

CERN has been associated with other Nobel Prizes in Physics, such as to Georges Charpak, for his innovative work developing methods of detecting radiation and particles, which are used not just at CERN but in industry and hospitals. CERN’s reach has been remarkable. From a vision that helped unite Europe, through science, we have seen it breach the Cold War, with collaborations in the 1960s onwards with JINR, the Warsaw Pact’s scientific analogue, and today CERN has become truly a physics laboratory for the world.

The post Celebrating 60 years of CERN appeared first on OUPblog.

10. A record-breaking lunar impact

By Jose M. Madiedo


On 11 September 2013, an unusually long and bright impact flash was observed on the Moon. Its peak luminosity was equivalent to a stellar magnitude of around 2.9.

What happened? A meteorite with a mass of around 400 kg hit the lunar surface at a speed of over 61,000 kilometres per hour.

Rocks often collide with the lunar surface at high speed (tens of thousands of kilometres per hour) and are instantaneously vaporised at the impact site. This gives rise to a thermal glow that can be detected by telescopes from Earth as short duration flashes. These flashes, in general, last just a fraction of a second.
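To put the peak magnitude of 2.9 in perspective: the stellar magnitude scale is logarithmic, with a difference of 5 magnitudes corresponding to a factor of 100 in flux. A minimal sketch (the 2.9 figure is from the article; treating magnitude 6 as the naked-eye limit is a common rule of thumb, not something the article states):

```python
def flux_ratio(m1, m2):
    """Flux of an object of magnitude m1 relative to one of magnitude m2.
    The scale is logarithmic: 5 magnitudes = a factor of 100 in brightness."""
    return 10 ** (0.4 * (m2 - m1))

# How much brighter was the magnitude-2.9 flash than the faintest
# stars visible to the naked eye (roughly magnitude 6)?
print(round(flux_ratio(2.9, 6.0), 1))  # about 17x
```

So for those eight seconds the flash outshone the faintest naked-eye stars by a factor of roughly seventeen, easily within reach of a small telescope watching the dark side of the Moon.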

The extraordinary flash in September was recorded from Spain by two telescopes operating in the framework of the Moon Impacts Detection and Analysis System (MIDAS). These devices were aimed at the same area on the night side of the Moon. With a duration of over eight seconds, this is the brightest and longest confirmed impact flash ever recorded on the Moon.


Our calculations show that the impact, which took place at 20:07 GMT, created a new crater around 40 metres in diameter in Mare Nubium. The rock itself had a size ranging between 0.6 and 1.4 metres. The impact energy was equivalent to over 15 tons of TNT, under the assumption of a luminous efficiency of 0.002 (the fraction of kinetic energy converted into visible radiation as a consequence of the hypervelocity impact).
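A naive kinetic-energy estimate from the figures quoted above (400 kg at 61,000 km/h, with the standard conversion of 4.184e9 J per ton of TNT) lands in the same range; the published value is derived from the flash brightness and the assumed luminous efficiency, not from this simple calculation:

```python
# Rough kinetic-energy check using the mass and speed quoted in the article.
TNT_TON_J = 4.184e9  # joules per ton of TNT (standard conversion)

m = 400.0            # kg
v = 61000 / 3.6      # m/s, about 16,900 m/s
ke = 0.5 * m * v ** 2

print(f"{ke:.1e} J, about {ke / TNT_TON_J:.0f} tons of TNT")
```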

The detection of impact flashes is one of the techniques used to analyse the flux of bodies reaching the Earth. One limitation of lunar impact monitoring is that an impact flash cannot be unambiguously associated with a given meteoroid stream. Nevertheless, our analysis shows that the most likely scenario is that the impactor had a sporadic origin (i.e., it was not associated with any known meteoroid stream). From the analysis of this event we have learnt that one-metre-sized objects may strike our planet about ten times as often as previously thought.

Dr. Jose Maria Madiedo is a professor at Universidad de Huelva. He is the author of “A large lunar impact blast on 2013 September 11” in the most recent issue of the Monthly Notices of the Royal Astronomical Society.

Monthly Notices of the Royal Astronomical Society is one of the world’s leading primary research journals in astronomy and astrophysics, as well as one of the longest established. It publishes the results of original research in astronomy and astrophysics, both observational and theoretical.

Subscribe to the OUPblog via email or RSS.
Subscribe to only physics and chemistry articles on the OUPblog via email or RSS.

The post A record-breaking lunar impact appeared first on OUPblog.

11. Minority women chemists yesterday and today

By Jeannette Brown


As far as we know, the first African American woman to earn a PhD in chemistry was Dr. Marie Daly, in 1947. I am still searching for an earlier one.

Women chemists, especially minority women chemists, have always been the underdogs in science and chemistry. African American women were not allowed to pursue a PhD degree in chemistry until late in the twentieth century, while white women were pursuing that degree in the late nineteenth and early twentieth centuries.

Racial prejudice was a major factor. Many African American men were denied access to this degree in the United States. The list of those who were able to receive a PhD in chemistry is short. The Knox brothers were able to receive PhDs in chemistry from MIT and Harvard in the 1930s. Some men had to go abroad to get a degree; Percy Julian obtained his from the University of Vienna in Austria.

In 1975, the American Association for the Advancement of Science sponsored a meeting of minority women scientists to explore what it was like to be both a woman and a minority in science. The meeting resulted in a report entitled The Double Bind: The Price of Being a Minority Woman in Science. Most of the women had experienced strong negative influences associated with race or ethnicity as children and teenagers, but felt more strongly the handicaps faced by women as they moved into post-college training in graduate school or later in their careers. When the women entered the career stage, they encountered both racism and sexism.

STS-47 Mission Specialist Mae Jemison in the center aisle of the Spacelab Japan (SLJ) science module aboard the Earth-orbiting Endeavour, Orbiter Vehicle (OV) 105. NASA. Public domain via Wikimedia Commons.

This is still true today in some respects, but it is often unconscious. For example, the organizers of an International Conference for Quantum Chemistry recently posted a list of the speakers. They were all men (the race of the speakers is not known). Three women who are pillars in the field protested and started a petition to add women to the speakers list. The organizers retracted the speaker list.

In 2009 the National Science Foundation sponsored a Women of Color conference. When I attended the meeting and listened to the speakers, it sounded as if not much had changed for women in science. There is still racism and sexism. Even Asian-American women, who do not constitute a minority within the field, were experiencing the same problems.

The 2010 Bayer Facts of Science Education XIV Survey polled 1,226 female and minority chemists and chemical engineers about their childhood, academic, and workplace experiences. The report stated that girls are not encouraged to study STEM (science, technology, engineering, and mathematics) fields early in school, that 60% of colleges and universities discourage women in science, and that 44% of professors discourage female students from pursuing STEM degrees.

The top three reasons for the underrepresentation are:

  • Lack of quality education in math and science in poor school districts
  • Stereotypes that STEM isn't for girls
  • Financial problems related to the cost of college education


In spite of all the negative findings in these reports, women are pursuing STEM careers. Women now dominate the National Organization for the Professional Advancement of Black Chemists and Chemical Engineers (NOBCChE), an organization that men dominated years ago. Its current vice president is a woman chemical engineer who is striving to make the organization better. Many of the NOBCChE female members went to Historically Black Colleges and Universities (HBCUs) for their undergraduate degrees before getting into major universities to obtain their PhDs. The HBCUs are a lifeline for African American students because the professors and administration strive to help them succeed in college.

I am amazed at all that these African American women scientists have done in spite of racism and sexism — succeeding and thriving in industry, working as professors and department chairs at major research universities, and providing role models to young women and men who are contemplating a STEM career.

Jeannette Elizabeth Brown is the author of African American Women Chemists. She is a former Faculty Associate at the New Jersey Institute of Technology. She is the 2004 Société de Chimie Industrielle (American Section) Fellow of the Chemical Heritage Foundation, and consistently lectures on African American women in chemistry.


The post Minority women chemists yesterday and today appeared first on OUPblog.

12. 8 марта 1979: Women’s Day in the Soviet Union

By Marjorie Senechal


“March 8 is Women’s Day, a legal holiday,” I wrote to my mother from Moscow. “This is one of the many cute cards that is on sale now, all with flowers somewhere on them. We hope March 8 finds you well and happy, and enjoying an early spring! Alas, here it is -30° C again.”

Soviet-era Women’s Day card. Public Domain via Radio Free Europe Radio Liberty.

I spent the 1978-79 academic year working in Moscow in the Soviet Academy of Science’s Institute of Crystallography. I’d been corresponding with a scientist there for several years and when I heard about the exchange program between our nations’ respective Academies, I applied for it. Friends were horrified. The Cold War was raging, and Afghanistan rumbled in the background. But scientists understand each other, just like generals do. I flew to Moscow, family in tow, early in October. The first snow had fallen the night before; women in wool headscarves were sweeping the airport runways with birch brooms.

None of us spoke Russian well when we arrived; this was immersion. We lived on the fourteenth floor of an Academy-owned apartment building with no laundry facilities and an unreliable elevator. It was a cold winter even by Russian standards, plunging to -40° on the C and F scales (they cross there). On weekdays, my daughters and I trudged through the snow to the broad Leninsky Prospect. The five-story brick Institute sat on the near side, and the girls went to Soviet public schools on the far side, behind a large department store. The underpass was a thriving illegal free-market where pensioners sold hard-to-find items like phone books, mushrooms, and used toys. Nearing the schools, we ran the ever-watchful Grandmother Gauntlet. In this country of working mothers, bundled bescarved grandmothers shopped, cooked, herded their charges, and bossed everyone in sight: Put on your hat! Button up your children!

At the Institute, I was supposed to be escorted to my office every day, but after a few months the guards waved me on. I couldn’t stray in any case: the doors along the corridors were always closed. Was I politically untouchable?

But the office was a friendly place. I shared it with three crystallographers: Valentina, Marina, and the professor I’d come to work with. We exchanged language lessons and took tea breaks together. Colleagues stopped by, some to talk shop, some for a haircut (Marina ran a business on the side). Scientists understand each other. My work took new directions.

I also tried to work with a professor from Moscow State University. He was admired in the west and I had listed him as a contact on my application. But this was one scientist I never understood. He arrived late for our appointments at the Institute without excuses or apologies. I was, I soon surmised, to write papers for him, not with him. I held my tongue, as I thought befits a guest, until the February afternoon he showed up two weeks late. Suddenly the spirit of the grandmothers possessed me. “How dare you!” I yelled in Russian. “Get out of here and don’t come back!” “Take some Valium,” Valentina whispered; wherever had she found it? But she was as proud as she was worried. The next morning I was untouchable no more: doors opened wide and people greeted me cheerily, “Hi! How’s it going?”

International Women’s Day, with roots in suffrage, labor, and the Russian Revolution, became a national holiday in Russia in 1918, and is still one today. In 1979, the cute postcards and flowers looked more like Mother’s Day cards, but men still gave gifts to the women they worked with. On 7 March I was fêted, along with the Institute’s female scientists, lab technicians, librarians, office staff, and custodians. I still have the large copper medal, unprofessionally engraved in the Institute lab. “8 марта” — 8 March — it says on one side, the lab initials and the year on the other. The once-pink ribbon loops through a hole at the top. Maybe they gave medals to all of us, or maybe I earned it for throwing the professor out of the Institute.

Women’s Day medal, courtesy of Marjorie Senechal.

I’ve returned to Russia many times; I’ve witnessed the changes. Science is changing too; my host, the Academy of Sciences founded by Peter the Great in 1724, may not reach its 300th birthday. But my friends are coping somehow, and I still feel at home there. A few years ago I flew to Moscow in the dead of winter for Russia’s gala nanotechnology kickoff. A young woman met me at the now-ultra-modern airport. She wore smart boots, jeans, and a parka to die for. “Put your hat on!” she barked in English as she led me to the van. “Zip up your jacket!”

Marjorie Senechal is the Louise Wolff Kahn Professor Emerita in Mathematics and History of Science and Technology, Smith College, and Co-Editor of The Mathematical Intelligencer. She is author of I Died for Beauty: Dorothy Wrinch and the Cultures of Science.

Subscribe to the OUPblog via email or RSS.
Subscribe to only physics and chemistry articles on the OUPblog via email or RSS.

The post 8 марта 1979: Women’s Day in the Soviet Union appeared first on OUPblog.

0 Comments on 8 марта 1979: Women’s Day in the Soviet Union as of 3/8/2014 8:56:00 AM
13. BICEP2 finds gravitational waves from near the dawn of time

By Andrew Liddle


The cosmology community is abuzz with news from the BICEP2 experiment of the discovery of primordial gravitational waves, through their signature in the cosmic microwave background. If verified, this will be a clear indication that the very young universe underwent a period of acceleration, known as cosmic inflation. During this period, it is thought that the seeds were laid down for all the structures to form later in the universe, including galaxies, stars, and indeed ourselves.

The cosmic microwave background (CMB) is radiation left over from the Hot Big Bang, first discovered in 1965 and corresponding to a temperature only about 2.7 degrees above absolute zero. In 1992 the COBE satellite made the first detection of temperature variations in the CMB, and successive experiments, including satellite missions WMAP and Planck, have been accurately measuring these variations which have become the key tool to understanding our universe.

In addition to its brightness, radiation can have a polarisation, meaning that the electromagnetic oscillations that make up the light have a preferred orientation, e.g. horizontal or vertical. This same effect is used in 3D cinemas, where light of different polarisations reaches your left or right eye, the lenses in the glasses blocking out one or other from each eye. In the CMB the polarisation signal is very small, and moreover comes in two types, known as E-mode and B-mode polarisation. The second of these, corresponding to a twisting pattern of polarisation on the sky, is what BICEP2 has discovered for the first time. This twisting pattern is the signature of gravitational waves, created in the early universe and whose presence causes space-time itself to ‘wobble’ as the light from the CMB crosses the Universe.

The Dark Sector Laboratory at Amundsen-Scott South Pole Station. At left is the South Pole Telescope. At right is the BICEP2 telescope. Photo by Amble, 2009. CC-BY-SA-3.0 via Wikimedia Commons.

The BICEP2 team have been working for several years with the single aim of measuring this signal; inflation predicted it to be there but said nothing about its strength. Based at the South Pole, where the unusually clear and dry air creates ideal conditions for accurate measurement, the team carried out three years of observations from 2010 to 2012. Their experiment differs from others measuring the CMB polarisation because it focussed on covering as large an area of the sky as possible, at relatively moderate angular resolution, in order to specifically target the B-mode signal.

While the discovery of gravitational waves had been widely rumoured in the days leading up to the announcement, including even the size of the measured signal, what took everyone’s breath away was the significance of the signal. At 6 to 7-sigma, it exceeds even the gold-standard 5-sigma used at CERN for the Higgs particle detection. Most would have expected something tentative, 2 or 3-sigma perhaps. We will want verification, of course, especially because the use of just a single wavelength of observation (the microwave equivalent of using just one colour of the rainbow) means the experiment is a little vulnerable to radiation from sources other than the CMB, such as intervening galaxies or emission caused by particles spiralling around our own Milky Way’s magnetic fields. The strength of the detection suggests that will not be an issue, but for sure we want to see independent confirmation by other experiments and at other wavelengths. Some may have announcements even before the end of the year, including the Planck satellite mission.
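A sigma level is shorthand for how unlikely it is that a signal is pure noise. As a rough illustration (a minimal sketch of my own, not the BICEP2 team’s analysis, and assuming the common one-sided Gaussian-tail convention), the thresholds mentioned above translate into probabilities like this:

```python
import math

def sigma_to_p(sigma: float) -> float:
    """One-sided Gaussian tail probability for a given sigma level."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

# The 5-sigma "gold standard" corresponds to odds of roughly 1 in 3.5 million
# that the signal is a statistical fluke; 6 to 7 sigma is smaller still.
for s in (3, 5, 6, 7):
    print(f"{s}-sigma: p = {sigma_to_p(s):.2e}")
```

On this convention a 3-sigma result still has about a 1-in-740 chance of being a fluctuation, which is why such results are treated as tentative rather than as discoveries.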

The response of the cosmology community to BICEP2 has been staggeringly swift. Early communication and discussion was already underway during the web-streamed BICEP2 press conference, via a Facebook discussion group set up by Scott Dodelson at Fermilab. The first science papers using the results appeared on the arXiv.org database within a couple of days (including these ones by me!). By the end of March, only two weeks after the announcement, there were already almost 50 available papers with ‘BICEP’ in the title, written by researchers all around the world. Papers on BICEP2 are clearly going to be a main theme for astronomy journals, including MNRAS, for the remainder of the year as we all try to figure out what, in detail, it all means.

Andrew Liddle is Professor of Theoretical Astrophysics at the Institute for Astronomy, University of Edinburgh. He is an editor of the OUP astronomy journal Monthly Notices of the Royal Astronomical Society.

Monthly Notices of the Royal Astronomical Society (MNRAS) is one of the world’s leading primary research journals in astronomy and astrophysics, as well as one of the longest established. It publishes the results of original research in astronomy and astrophysics, both observational and theoretical.

Subscribe to the OUPblog via email or RSS.
Subscribe to only physics and chemistry articles on the OUPblog via email or RSS.

The post BICEP2 finds gravitational waves from near the dawn of time appeared first on OUPblog.

0 Comments on BICEP2 finds gravitational waves from near the dawn of time as of 4/11/2014 5:25:00 AM
14. 18 facts you never knew about cheese

Have you often lain awake at night, wishing that you knew more about cheese? Fear not! Your prayers have been answered; below you will find 18 of the most delicious cheese facts, all taken from Michael Tunick’s recent book The Science of Cheese. Prepare to be the envy of everyone at your next dinner party – just try not to be too “cheesy”. Bon Appétit!

  1. The world’s most expensive cheese comes from a Swedish moose farm and the cheese sells for £300 a pound.
  2. You can’t make cheese entirely from human milk since it won’t coagulate properly.
  3. The largest cheese ever made was a Cheddar weighing 56,850 pounds, in 1989.
  4. 97% of British people are ‘lactose persistent’, making them the most lactose-tolerant population in the world.
  5. Genuine Flor de Guia cheese must be made in the Canary Islands by women, otherwise it won’t be considered the genuine article.
  6. The expression “cheesy” used to mean first-rate, but sarcastic use of the word has caused it to mean the opposite.
  7. The bacteria used for smear-ripened cheeses are closely related to the bacteria that generates sweaty feet odour.
  8. Cheese as we know it today was (accidentally) discovered over 8,000 years ago when milk separated into curds and whey.
  9. Edam was used as cannonballs (and killed two soldiers) in a battle between Montevideo and Buenos Aires in 1841.
  10. An odour found in tomcat urine is considered desirable in Cheddar.
  11. Each American adult consumes an average of 33 pounds of cheese each year.
  12. Descriptions of the defects in the eyes of Swiss-type cheeses include the terms “blowhole” and “frogmouth”.
  13. There are over 1,265,000 dairy cows in the US state of Wisconsin alone.
  14. A northern Italian bank uses Parmesan as loan collateral.
  15. Sardinia’s Casu Marzu, which means ‘rotten cheese’, is safe to eat only if it contains live maggots.
  16. Cheese consumption in the United Kingdom is at a measly 24.0 pounds per capita.
  17. This cheese consumption isn’t even close to Greece’s, which leads the way with a whopping 68.4 pounds per capita.
  18. Dmitri Mendeleev was a consultant on artisanal cheese production while he was also inventing the periodic table of the elements.

All of these cheese facts are taken from The Science of Cheese. The Science of Cheese is an engaging tour of the science and history of cheese, and the only book to discuss the actual chemistry, biology, and physics of cheese making. Author Michael Tunick is a research chemist with the Dairy and Functional Foods Research Unit of the U.S. Department of Agriculture’s Agricultural Research Service.

Subscribe to the OUPblog via email or RSS.
Subscribe to only physics and chemistry articles on the OUPblog via email or RSS.
Image credit: Weichkaese Soft Cheese. Photo by Eva K. CC BY-NC-ND 3.0 via Wikimedia Commons.

The post 18 facts you never knew about cheese appeared first on OUPblog.

0 Comments on 18 facts you never knew about cheese as of 4/26/2014 7:39:00 AM
15. Inferring the unconfirmed: the no alternatives argument

By Richard Dawid, Stephan Hartmann, and Jan Sprenger


“When you have eliminated the impossible, whatever remains, however improbable, must be the truth.” Thus Arthur Conan Doyle has Sherlock Holmes describe a crucial part of his method of solving detective cases. Sherlock Holmes often takes pride in adhering to principles of scientific reasoning. Whether or not this particular element of his analysis can be called scientific is not straightforward to decide, however. Do scientists use ‘no alternatives arguments’ of the kind described above? Is it justified to infer a theory’s truth from the observation that no other acceptable theory is known? Can this be done even when empirical confirmation of the theory in question is sketchy or entirely absent?

The Edinburgh Statue of Sherlock Holmes. Photo by Siddharth Krish. CC-BY-SA 3.0 via Wikimedia Commons.

The canonical understanding of scientific reasoning insists that theory confirmation be based exclusively on empirical data predicted by the theory in question. From that point of view, Holmes’ method may at best play the role of a side show; the real work of theory evaluation is done by comparing the theory’s predictions with empirical data.

Actual science often tells a different story. Scientific disciplines like palaeontology or archaeology aim at describing historic events that have left only scarce traces in today’s world. Empirical testing of those theories always remains fragmentary. Under such conditions, assessing a theory’s scientific status crucially relies on the question of whether or not convincing alternative theories have been found.

Just recently, this kind of reasoning scored a striking success in theoretical physics when the Higgs particle was discovered at CERN. Besides confirming the Higgs model itself, the Higgs discovery also vindicated the judgemental prowess of theoretical physicists, who had been fairly sure of the existence of the Higgs particle since the mid-1980s. Their assessment had been based on a clear-cut no alternatives argument: there seemed to be no alternative to the Higgs model that could render particle physics consistent.

Similarly, string theory is one of the most influential theories in contemporary physics, even in the absence of favorable empirical evidence and the ability to generate specific predictions. Critics argue that for these reasons, trust in string theory is unjustified, but defenders deploy the no alternatives argument: since the physics community devoted considerable efforts to developing alternatives to string theory, the failure of these attempts and the absence of similarly unified and worked-out competitors provide a strong argument in favor of string theory.

These examples show that the no alternatives argument is in fact used in science. But does it constitute a legitimate way of reasoning? In our work, we aim at identifying the structural basis for the no alternatives argument. We do so by constructing a formal model of the argument with the help of so-called Bayesian nets. That is, the argument is analyzed as a case of reasoning under uncertainty about whether a scientific theory H (e.g. string theory) is right or wrong.

A Bayes net that captures the inferential relations between the relevant propositions in the no alternatives argument. D = complexity of the problem, F = failure to find an alternative, Y = number of alternatives, T = H is the right theory.

We argue that the failure of finding a viable alternative to theory H, in spite of many attempts by clever scientists, lowers our expectations on the number of existing serious alternatives to H. This provides in turn an argument that H is indeed the right theory. In total, the probability that H is right is increased by the failure to find an alternative, demonstrating that the inference behind the no alternatives argument is valid in principle.
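The effect can be reproduced with a toy enumeration over a network of this shape. Everything numerical below is an illustrative assumption of mine, not taken from the paper: Y is the number of adequate theories, H is the right one with probability 1/Y, and F, the failure to find an alternative, is more likely the fewer alternatives exist.

```python
from fractions import Fraction as Fr

# Illustrative (made-up) prior over the number of adequate theories Y
p_Y = {1: Fr(1, 3), 2: Fr(1, 3), 3: Fr(1, 3)}
# P(T | Y = k): H is one of k equally credible candidate theories
p_T_given_Y = {k: Fr(1, k) for k in p_Y}
# P(F | Y = k): the fewer rivals exist, the likelier the search fails
p_F_given_Y = {1: Fr(9, 10), 2: Fr(1, 2), 3: Fr(1, 5)}

# Prior probability that H is right, marginalising over Y
p_T = sum(p_Y[k] * p_T_given_Y[k] for k in p_Y)

# Posterior after observing F, by Bayes' rule over the joint distribution
p_F = sum(p_Y[k] * p_F_given_Y[k] for k in p_Y)
p_T_and_F = sum(p_Y[k] * p_F_given_Y[k] * p_T_given_Y[k] for k in p_Y)
p_T_given_F = p_T_and_F / p_F

print(f"P(T)   = {float(p_T):.3f}")          # prior
print(f"P(T|F) = {float(p_T_given_F):.3f}")  # raised by the observed failure
```

With these made-up numbers the prior of about 0.61 rises to about 0.76 after observing F; other choices of conditional probabilities make the boost large or negligible, which is precisely why the strength of the argument is context-sensitive.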

There is an important caveat, however. Based on the no alternatives argument alone, we cannot say how much the probability of the theory in question is raised. It may be substantial, but it may only be a tiny little bit. In that case, the confirmatory force of the no alternatives argument may be negligible.

The no alternatives argument thus is a fascinating mode of reasoning that contains a valid core. However, determining the strength of the argument requires going beyond the mere observation that no alternatives have been found. This matter is highly context-sensitive and may lead to different answers for string theory, paleontology and detective stories.

Richard Dawid, Stephan Hartmann, and Jan Sprenger are the authors of “The No Alternatives Argument” (available to read for free for a limited time) in the British Journal for the Philosophy of Science. Richard Dawid is lecturer (Dozent) and researcher at the University of Vienna. Stephan Hartmann is Alexander von Humboldt Professor at the LMU Munich. Jan Sprenger is Assistant Professor at Tilburg University. Their work focuses on the application of probabilistic methods within the philosophy of science.

For over fifty years The British Journal for the Philosophy of Science has published the best international work in the philosophy of science under a distinguished list of editors including A. C. Crombie, Mary Hesse, Imre Lakatos, D. H. Mellor, David Papineau, James Ladyman, and Alexander Bird. One of the leading international journals in the field, it publishes outstanding new work on a variety of traditional and cutting edge issues, such as the metaphysics of science and the applicability of mathematics to physics, as well as foundational issues in the life sciences, the physical sciences, and the social sciences.


The post Inferring the unconfirmed: the no alternatives argument appeared first on OUPblog.

0 Comments on Inferring the unconfirmed: the no alternatives argument as of 4/27/2014 4:05:00 AM
16. Feynman diagrams and the fly in the ointment

By Tom Lancaster and Stephen J. Blundell


Sometimes it’s the fly in the ointment, the thing that spoils the purity of the whole picture, which leads to the big advances in science. That’s exactly what happened at a conference in Shelter Island, New York in 1947 when a group of physicists gathered to discuss the latest breakthroughs in their field which seemed at first sight to make everything more complicated.

Isidor Rabi reported experimental results from Columbia University that showed that the g-factor for the electron, a property reflecting its magnetic moment, was not precisely two, as Paul Dirac’s beautiful theory of the electron had predicted, but came out to be a messy 2.00244 (though the modern value is very slightly lower than this). And Willis Lamb, also at Columbia, explained how two energy levels in the hydrogen atom which were supposed (again according to Dirac) to be coincident were very slightly displaced from each other (an effect now known as the Lamb shift).

These were apparently messy, annoying and disruptive results that ruined a pure, dignified and elegant theory. But physicists like a challenge, and the conference attendees included Hans Bethe, Julian Schwinger, and Richard Feynman, all three of whom would attack the problem. The key insight was to realize that there are a multitude of quantum processes that can occur, and which had been forgotten. An electron is not just an electron, but is surrounded by a cloud of virtual particles: photons, electrons, and antielectrons, popping in and out of existence. These higher order processes are most pictorially described by Feynman diagrams, simple cartoons containing dots, arrows and wiggly lines, each one a shorthand for a mathematical term in a complex calculation but summarizing a physical interaction in an elegant form.

Feynman Diagram

These diagrams can be used to show how the basic interaction between electrons and light is altered by quantum processes, an effect which tweaks the electron’s magnetic moment. This slightly shifts the “g-factor” and gives a prediction which has been verified experimentally to many decimal places. It also affects the way in which the spin and orbital angular momentum behave, which can be used to explain the Lamb shift. These tiny effects signal a vacuum that is not empty but teeming with quantum life, myriad interactions shimmering around every particle.

Feynman diagrams first appeared in print sixty-five years ago this year, so they have now reached statutory retirement age. But rather than being put out to grass, Feynman’s cartoons are still used to make calculations and describe physical processes. They are at the foundation of modern quantum field theory, and if we ever figure out how to make a theory of quantum gravity, it is pretty likely Feynman diagrams will be in the description. It’s a reminder of why detailed measurements are needed in physics. Those little discrepancies can lead to revolutions in understanding.

Tom Lancaster was a Research Fellow in Physics at the University of Oxford, before becoming a Lecturer at the University of Durham in 2012. Stephen J. Blundell is a Professor of Physics at the University of Oxford and a Fellow of Mansfield College, Oxford. They are co-authors of Quantum Field Theory for the Gifted Amateur.


The post Feynman diagrams and the fly in the ointment appeared first on OUPblog.

0 Comments on Feynman diagrams and the fly in the ointment as of 1/1/1900
17. The Man in the Monkeynut Coat and the men in the yellow jerseys

By Kersten Hall


It is a safe bet that the name of Pierre Rolland rings very few bells among the British public. In 2012, Rolland, riding for Team Europcar, finished in eighth place in the overall final classification of the Tour de France, whilst Sir Bradley Wiggins has since become a household name following his fantastic achievement of being the first British person ever to win the most famous cycle race in the world.

In the world of sport, we remember a winner. But the history of science is often also described in similar terms – as a tale of winners and losers racing to the finish line. Nowhere is this more true than in the story of the discovery of the structure of DNA. When James Watson’s book The Double Helix was published in 1968, it depicted science as a frantic and often ruthless race in which the winner clearly took all. In Watson’s account, it was he and his Cambridge colleague Francis Crick who were first to cross the finish line, with their competitors Rosalind Franklin at King’s College, London and Linus Pauling at Caltech, Pasadena trailing in behind.

There is no denying the importance of Watson and Crick’s achievement: their double-helical model of DNA not only answered fundamental questions in biology such as how organisms pass on hereditary traits from one generation to the next but also heralded the advent of genetic engineering and the production of vital new medicines such as recombinant insulin. But it is worth asking whether this portrayal of science as a breathless race to the finish line with only winners and losers, is necessarily an accurate one. And perhaps more importantly, does it actually obscure the way that science really works?

William Astbury. Reproduced with the permission of Leeds University Library

To illustrate this point, it is worth remembering that Watson and Crick obtained a vital clue to solving the double helix thanks to a photograph taken by the crystallographer Rosalind Franklin. Labelled in her lab notes as ‘Photo 51’, it showed a pattern of black spots arranged in the shape of a cross, formed when X-rays were diffracted by fibres of DNA. The effect of this image on Watson was dramatic. The sight of the black cross, he later said, made his jaw drop and pulse race, for he knew that this pattern could only arise from a molecule that was helical in shape.

In recognition of its importance in the discovery of the double-helical structure of DNA, a plaque on the wall outside King’s College, London, where Franklin worked, now hails ‘Photo 51’ as ‘one of the world’s most important photographs’. Yet curiously, neither Watson nor Franklin was the first to observe this striking cross pattern. Almost a year earlier, the physicist William Astbury, working in his lab at Leeds, had obtained an almost identical X-ray diffraction pattern of DNA.

Yet despite obtaining this clue that would prove to be so vital to Watson and Crick, Astbury never solved the double-helical structure himself, and whilst the Cambridge duo went on to win the Nobel Prize for their work, Astbury remains largely forgotten.

But to dismiss him as a mere ‘also-ran’ in the race for the double helix would be both harsh and hasty: the questions that Astbury was asking and the aims of his research were subtly but significantly different to those of Watson and Crick. The Cambridge duo were solely focussed on DNA, whereas Astbury felt that by studying a wide range of biological fibres from wool to bacterial flagella, he might uncover some deep common theme based on molecular shape that could unify the whole of biology. It was this emphasis on the molecular shape of fibres, and on how those shapes could change, that formed his core definition of the new science of ‘molecular biology’, which he helped to found and popularise, and which has had a profound impact on modern biology and medicine.

On 5th July this year, Leeds will host ‘Le Grand Depart’ – the start of the 2014 Tour de France. As the contestants begin to climb the hills of Yorkshire, each will no doubt harbour dreams of wearing the coveted yellow jersey, and all will have their sights firmly fixed on crossing the same ultimate finishing line. At first sight scientific discovery may also appear to be a race towards a single finish line, but in truth it is a much more muddled affair, rather like a badly organised school sports day in which several races, all taking place in different directions and over different distances, become jumbled together. For this reason it makes little sense to think of Astbury as having ‘lost’ the race for DNA to Watson and Crick. That Leeds was chosen to host the start of the 2014 Tour de France is an honour in which the city can take pride, but in the life and work of William Astbury it also has a scientific heritage of which it can be equally proud.

Kersten Hall graduated from St. Anne’s College, Oxford, with a degree in biochemistry, before embarking on a PhD at the University of Leeds using molecular biology to study how viruses evade the human immune system. He then worked as a Research Fellow in the School of Medicine at Leeds, during which time he developed a keen interest in the historical and philosophical roots of molecular biology. He is now Visiting Fellow in the School of Philosophy, Religion and History of Science, where his research focuses on the origins of molecular biology and in particular the role of the pioneering physicist William T. Astbury and the work of Sir William and Lawrence Bragg. He is the author of The Man in the Monkeynut Coat.

Image credit: William Astbury, Reproduced with the permission of Leeds University Library

The post The Man in the Monkeynut Coat and the men in the yellow jerseys appeared first on OUPblog.

0 Comments on The Man in the Monkeynut Coat and the men in the yellow jerseys as of 1/1/1900
18. True or false? Ten myths about Isaac Newton

By Sarah Dry


Nearly three hundred years since his death, Isaac Newton is as much a myth as a man. The mythical Newton abounds in contradictions; he is a semi-divine genius and a mad alchemist, a somber and solitary thinker and a passionate religious heretic. Myths usually have an element of truth to them but how many Newtonian varieties are true? Here are ten of the most common, debunked or confirmed by the evidence of his own private papers, kept hidden for centuries and now freely available online.

10. Newton was a heretic who had to keep his religious beliefs secret.

True. While Newton regularly attended chapel, he abstained from taking holy orders at Trinity College. No official excuse survives, but numerous theological treatises he left make perfectly clear why he refused to become an ordained clergyman, as College fellows were normally obliged to do. Newton believed that the doctrine of the Trinity, in which the Father, the Son and the Holy Ghost were given equal status, was the result of centuries of corruption of the original Christian message and therefore false. Trinity College’s most famous fellow was, in fact, an anti-Trinitarian.

9. Newton never laughed.

False, but only just. There are only two specific instances that we know of when the great man laughed. One was when a friend to whom he had lent a volume of Euclid’s Elements asked what the point of it was, ‘upon which Sir Isaac was very merry.’ (The point being that if you have to ask what the point of Euclid is, you have already missed it.) So far, so moderately funny. The second time Newton laughed was during a conversation about his theory that comets inevitably crash into the stars around which they orbit. Newton noted that this applied not just to other stars but to the Sun as well and laughed while remarking to his interlocutor John Conduitt ‘that concerns us more.’

8. Newton was an alchemist.

True. Alchemical manuscripts make up roughly one tenth of the ten million words of private writing that Newton left on his death. This archive contains very few original treatises by Newton himself, but what does remain tells us in minute detail how he assessed the credibility of mysterious authors and their work. Most are copies of other people’s writings, along with recipes, a long alchemical index and laboratory notebooks. This material puzzled and disappointed many who encountered it, such as biographer David Brewster, who lamented ‘how a mind of such power, and so nobly occupied with the abstractions of geometry, and the study of the material world, could stoop to be even the copyist of the most contemptible alchemical work, the obvious production of a fool and a knave.’ While Brewster tried to sweep Newton’s alchemy under the rug, John Maynard Keynes made a splash when he wrote provocatively that Newton was the ‘last of the magicians’ rather than the ‘first king of reason.’

7. Newton believed that life on earth (and most likely on other planets in the universe) was sustained by dust and other vital particles from the tails of comets.

True. In Book 3 of the Principia, Newton wrote extensively about how the rarefied vapour in comets’ tails was eventually drawn to earth by gravity, where it was required for the ‘conservation of the sea, and fluids of the planets’ and was most likely responsible for the ‘spirit’ which makes up the ‘most subtle and useful part of our air, and so much required to sustain the life of all things with us.’

6. Newton was a self-taught genius who made his pivotal discoveries in mathematics, physics and optics alone in his childhood home of Woolsthorpe while waiting out the plague years of 1665-7.

False, though this is a tricky one. One of the main treasures that scholars have sought in Newton’s papers is evidence for his scientific genius and for the method he used to make his discoveries. It is true that Newton’s intellectual achievement dwarfed that of his contemporaries. It is also true that as a 23 year-old, Newton made stunning progress on the calculus, and on his theories of gravity and light while on a plague-induced hiatus from his undergraduate studies at Trinity College. Evidence for these discoveries exists in notebooks which he saved for the rest of his life. However, notebooks kept at roughly the same time, both during his student days and his so called annus mirabilis, also demonstrate that Newton read and took careful notes on the work of leading mathematicians and natural philosophers, and that many of his signature discoveries owe much to them.

5. Newton found secret numerological codes in the Bible.

True. Like his fellow analysts of scripture, Newton believed there were important meanings attached to the numbers found there. In one theological treatise, Newton argues that the Pope is the anti-Christ based in part on the appearance in Scripture of the number of the name of the beast, 666. In another, he expounds on the meaning of the number 7, which figures prominently in the numbers of trumpets, vials and thunders found in Revelation.

4. Newton had terrible handwriting, like all geniuses.

False. Newton’s handwriting is usually clear and easy to read. It did change somewhat throughout his life. His youthful handwriting is slightly more angular, while in his old age, he wrote in a more open and rounded hand. More challenging than deciphering his handwriting is making sense of Newton’s heavily worked-over drafts, which are crowded with deletions and additions. He also left plenty of very neat drafts, especially of his work on church history and doctrine, which some considered to be suspiciously clean, evidence, said his 19th century cataloguers, of Newton’s having fallen in love with his own hand-writing.

3. Newton believed the earth was created in seven days.

True. Newton believed that the Earth was created in seven days, but he assumed that one revolution of the planet at the beginning of time took much longer than it does today.

2. Newton discovered universal gravitation after seeing an apple fall from a tree.

False, though Newton himself was partly responsible for this myth. Seeking to shore up his legacy at the end of his life, Newton told several people, including Voltaire and his friend William Stukeley, the story of how he had observed an apple falling from a tree while waiting out the plague in Woolsthorpe between 1665-7. (He never said it hit him on the head.) At that time Newton was struck by two key ideas—that apples fall straight to the center of the earth with no deviation and that the attractive power of the earth extends beyond the upper atmosphere. As important as they are, these insights were not sufficient to get Newton to universal gravitation. That final, stunning leap came some twenty years later, in 1685, after Edmund Halley asked Newton if he could calculate the forces responsible for an elliptical planetary orbit.

1. Newton was a virgin.

Almost certainly true. One bit of evidence comes via Voltaire, who heard it from Newton’s physician Richard Mead and wrote it up in his Letters on England, noting that unlike Descartes, Newton was ‘never sensible to any passion, was not subject to the common frailties of mankind, nor ever had any commerce with women.’ More substantively, there is Newton’s lifelong status as a self-proclaimed godly bachelor who berated his friend Locke for trying to ‘embroil’ him with women and who wrote passionately about how other godly men struggled to tame their lust.

Sarah Dry is a writer, independent scholar, and a former post-doctoral fellow at the London School of Economics. She is the author of The Newton Papers: The Strange and True Odyssey of Isaac Newton’s Manuscripts. She blogs at sarahdry.wordpress.com and tweets at @SarahDry1.

Subscribe to the OUPblog via email or RSS.
Subscribe to only physics and chemistry articles on the OUPblog via email or RSS.
Image credit: Portrait of Isaac Newton by Sir Godfrey Kneller. Public domain via Wikimedia Commons.

The post True or false? Ten myths about Isaac Newton appeared first on OUPblog.

19. Boxes and paradoxes

By Marjorie Senechal


It was eerie, a gift from the grave. But I thank serendipity, not spooks. The gift, it turns out, was given forty years ago. When Dorothy Wrinch cleared out her office in the Smith College Science Center, she left her books for the library, her burgeoning notebooks and contentious correspondence for the archives, and three boxes of crystal models and model parts for me. But I was on sabbatical, and whoever stashed the boxes in the basement never told me. They’d be there still had a young colleague not gone rummaging for something else last fall and found them, “For Mrs. Senechal” pencilled on the top. And so they reached me at last. Forty years ago, I would have treasured these models as she had. But what can I do with them now?

Bring them to Montreal for show-and-tell? Crystallographers from all over the world are gathering there for their triennial Congress. The year 2014 is a special anniversary. On the eve of World War I, an undergraduate at the University of Cambridge, William Lawrence Bragg, walking along the river behind his college, found the Rosetta Stone of the solid state. The then-recent discovery that crystals scatter x-rays had solved for the x: the mysterious rays are waves, like light. Bragg turned this around, deciphering the structures of simple crystals from the patterns in their scattered rays. Today’s textbooks trace the path from his work on table salt and diamond to the double helix, modern drug design, and the highest of high-tech materials. We forget that the path was neither easy nor straight. The boxes of chipped and scattered model parts Wrinch left me bear witness to the early years, when scientists argued over whether salt is really the 3-D atomic checkerboard Bragg said it was, whether proteins are chains or rings as Wrinch said they were, and how to interpret the diffraction patterns of mind-bogglingly complicated crystals.

But the boxes are bulky and too heavy for airlines that charge by the ounce. So what should I do with them? I’m deeply touched by the gift; I won’t throw them out. But if they were ever user-friendly, they aren’t anymore. It’s hard to fit the rods into the balls, and the paint on the balls is flaking. And who needs real models now, when we have vivid, interactive computer graphics on our iPads? (Let’s get that one out of the way: real models are still working tools for me and I’m not alone.) No, it’s not their aged parts, it’s their aged ideas that make these models obsolete.

Figure 1. A ball from the box of model parts that Dorothy Wrinch left for me.

One book Wrinch didn’t leave for the library was a massive, gilded tome called Grammar of Ornament. It’s a cornerstone of the decorative arts, a veritable catalogue of rectangular swatches of floor, wall, and ceiling patterns created by people in all times and places. She loved this book because ornaments are like 2-D crystals. This analogy was crystallography’s chief paradigm, questioned by no one: the atoms in crystals repeat periodically in space. If you know one swatch (crystallographers call it a unit cell), you know the whole thing. A Grammar of Crystals would be a catalogue of swatches of 3-D atomic patterns. But that was then. Swatches are to modern crystallography as Pythagoras’s whole-number ratios are to √2 and pi. They’re still useful, but they’re not the whole story. The world of crystals, like the world of numbers, turns out to be bigger than anyone imagined.

Look closely at Wrinch’s wooden balls (Figure 1). The holes are drilled at the corners of squares, and at the centers of those squares, and at the centers of their edges. Six squares make a cube; if you picked up a ball and turned it around, you’d see the cubic pattern. With balls like these and rods to connect them, you can build 3-D swatches that stack like bricks to fill space. And that’s all you can build. But as the last century drew to a close, this paradigm crumbled. There are crystals, we now know, whose atomic patterns don’t repeat like ornaments. They spring surprises at every turn (Figure 2).

Figure 2. Left: To create this pattern, just fit the swatches together. Right: How would you extend this swatchless pattern?

Aperiodic crystals have opened a new chapter; what will its paradigms be? At this still-early stage, we conjecture, argue, explore the new terrain from every angle. It’s fitting, and telling, that the Montreal Congress will be a double celebration. If ever a scientific discovery changed the world, x-ray crystallography did. But, paradoxically, the Congress will give its plums and prizes this year to the scientists who consigned its paradigm to history’s basement and sent us back to basics.

Figure 3. Compare the flexibility of this modern Zome Tool connector with its rigid ancestor in Figure 1.

Figure 4. A model of an actual non-repeating crystal structure made with Zome Tools by my students at the Park City Mathematics Institute, July 2014. Though aperiodic, this pattern of atoms can be extended in space.

I’ll put Wrinch’s models back in storage. She wouldn’t mind. “A science which hesitates to forget its founders is lost,” Alfred North Whitehead declared in 1916. A mature science, he explained, reconfigures itself as a logical structure from which the arguments and passions that built it are erased. Dorothy, then a student of his colleague Bertrand Russell, took the logical structure of science as a challenge. Later, when she ventured into less abstract realms, their reconfiguration was her mission. She would be delighted, I think, that so much of crystallography is automated today, and that the Grammar of Crystals is a databank. She would be delighted by new vistas to be reconfigured with modern models. And she would be delighted that crystallographers are still arguing.

Marjorie Senechal is the Louise Wolff Kahn Professor Emerita in Mathematics and History of Science and Technology, Smith College, and Co-Editor of The Mathematical Intelligencer. She is author of I Died for Beauty: Dorothy Wrinch and the Cultures of Science. She will be attending the International Union of Crystallography Congress in Montreal, 5-12 August 2014.

Chemistry Giveaway! In time for the 2014 American Chemical Society fall meeting and in honor of the publication of The Oxford Handbook of Food Fermentations, edited by Charles W. Bamforth and Robert E. Ward, Oxford University Press is running a paired giveaway with this new handbook and Charles Bamforth’s other must-read book, the third edition of Beer. The sweepstakes ends on Thursday, August 14th at 5:30 p.m. EST.

Image Credit: Photos by Marjorie Senechal.

The post Boxes and paradoxes appeared first on OUPblog.

20. Extending patent protections to discover the next life-saving drugs

By Jie Jack Li


At the end of last year, Eli Lilly’s mega-blockbuster antidepressant Cymbalta went off patent. Cymbalta’s generic version, known as duloxetine, rushed into the market and drove down the price, making the drug more affordable.

Great news for everyone, right? Well, not quite.

Indeed, generic competition is a great boon to the payer and the patient. On the other hand, the makers of the brand medicine can lose about 70% of the revenue. Without sustained investment in drug discovery and development, there will be fewer and fewer lifesaving drugs, not really a scenario the patient wants. Cymbalta had sales of $6.3 billion last year. Combined with Zyprexa, which lost patent protection in 2011, Lilly lost $10 billion in annual sales from these two drugs alone. The company responded by freezing salaries and slashing 30% of its sales force.

Prescription Prices. Photo by Chris Potter, StockMonkeys.com. CC BY 2.0 via Flickr.

Lilly is not alone in this quandary. In 2011, Pfizer lost its $13 billion drug Lipitor, the best-selling drug ever, which made “merely” $2.3 billion in 2013. Of course, Pfizer became the number one drug company by swallowing Warner-Lambert, Pharmacia, and Wyeth, shutting down many research sites that were synonymous with the American pharmaceutical industry, and shedding tens of thousands of jobs. Meanwhile, Merck lost US marketing exclusivity for its asthma drug Singulair (montelukast) in 2012 and saw a 97% decline in US sales in 4Q12 compared with 4Q11. Merck announced in October last year that it would cut 8,500 jobs on top of the 7,500 layoffs planned earlier. Bristol-Myers Squibb’s Plavix (clopidogrel) had peak sales of $7 billion, making it the second best-selling drug ever; after it lost patent protection in May 2012, sales were $258 million last year. Meanwhile, BMS has shrunk from 43,000 to 28,000 employees in the last decade.

Generic competition is not the only woe big Pharma is facing. The outsourcing of Pharma jobs to China and India, M&A, and the economic downturn have left thousands of highly paid, highly educated scientists scrambling for alternative employment, often outside the drug industry. With numerous site closures, outsourcing cost reductions, and downsizing, some 150,000 Pharma workers lost their jobs from 2009 through 2012, according to the consulting firm Challenger, Gray & Christmas. Such a brain drain makes us the lost generation of American drug discovery scientists, this author included. In contrast, Japanese drug companies have refused to improve the bottom line through mass layoffs of R&D staff, a decision that will likely benefit productivity in the long run.

What can we do to ensure the health of the drug industry and sustain the output of lifesaving medicines? Realizing that there is no single prescription for this issue, one could certainly begin talking about patent reform.

The current patent system is antiquated as far as innovative drugs are concerned. Decades ago, 17 years of patent life was adequate for drug companies to recoup their investment in R&D, because the cycle from discovery to marketing was relatively short and the cost was lower. Today’s drug discovery and development is a completely new ballgame. First, the low-hanging fruit has been harvested, and it is becoming increasingly challenging to create novel drugs, especially “first-in-class” medicines. Second, clinical trials are longer and enroll more patients, increasing the cost and eating into patent life. The latest statistics put the cost of taking a drug from idea to market at $1.3 billion, once the costs of failed drugs are taken into account. This is a major reason prescription drugs are so expensive: pharmaceutical companies need to recoup their investment so that they have money to invest in discovering future life-saving medicines. Today’s patent life of 20 years (extended from 17 years in 1995) is therefore insufficient for innovative medicines, especially first-in-class ones.

Patent life for innovative medicines should be extended, because both the risk and the failure rate are highest there. And since the cycle from idea to regulatory approval keeps getting longer, it would make more sense for the patent clock to start ticking only after the drug is approved, with exclusivity still provided from the filing date.

The current compensation system for the discovery of lifesaving drugs is in dire need of reform as well. Top executives receive millions in compensation even as their companies lay off thousands of employees to cut costs. Recently, GlaxoSmithKline announced that it will pay significant bonuses to scientists who discover drugs. This is a good start.

The phenomenon of blockbuster drugs was a harbinger of the golden age of the pharmaceutical industry. Patients were happy because taking medicines was vastly cheaper than staying in the hospital. Shareholders were happy because huge profit was made and stocks for big Pharma used to be considered a sure bet.

Perhaps most importantly, the drug industry expanded and employed more and more scientists to its workforce. That employment in turn encouraged academia to train more students in science. America’s Science, Technology, Engineering, and Mathematics education was and still is the envy of the rest of the world. Maintaining that important reputation depends on a thriving pharmaceutical industry to provide jobs for our leading scientists and researchers. In turn they will reward us by discovering the next life-saving drugs.

Dr. Jie Jack Li is an associate professor at the University of San Francisco. He is the author of over 20 books on the history of drug discovery, medicinal chemistry, and organic chemistry. His latest book is Blockbuster Drugs: The Rise and Decline of the Pharmaceutical Industry.

The post Extending patent protections to discover the next life-saving drugs appeared first on OUPblog.

21. The health benefits of cheese

By Michael H. Tunick


Lipids (fats and oils) have historically been thought to elevate weight and blood cholesterol and have therefore been considered to have a negative influence on the body. Foods such as full-fat milk and cheese have been avoided by many consumers for this reason. This attitude has been changing in recent years. Some authors now claim that consumption of unnecessary carbohydrates, rather than fat, is responsible for the epidemics of obesity and type 2 diabetes mellitus (T2DM). Most people who do consume milk, cheese, and yogurt know that the calcium helps with bones and teeth, but studies have shown that consumption of cheese and other dairy products appears to be beneficial in many other ways. Remember that cheese is a concentrated form of milk: milk is 87% water, and when it is processed into cheese, the nutrients are concentrated by roughly a factor of ten. The positive attributes of milk are even stronger in cheese. Here are some examples involving protein:

Some bioactive peptides in casein (the primary protein in cheese) inhibit angiotensin-converting enzyme, which has been implicated in hypertension. Large studies have shown that dairy intake reduces blood pressure.

Cheese helps prevent tooth decay through a combination of bacterial inhibition and remineralization. Further, lactoferrin, a minor milk protein found in cheese, has anticancer properties: it appears to keep cancer cells from proliferating.

Vitamins and minerals in cheese may not get enough credit. A meta-analysis of 16 studies showed that consumption of 200 g of cheese and other dairy products per day resulted in a 6% reduction of risk of T2DM, with a significant association between reduction of incidence of T2DM and intake of cheese, yogurt, and low-fat dairy products. Much of this may be due to vitamin K2, which is produced by bacteria in fermented dairy products.

Metabolic syndrome increases the risk for T2DM and heart disease, but research showed that the incidence of this syndrome decreased as dairy food consumption increased, a result that was associated with calcium intake.

Image Credit: State Library of South Australia via Creative Commons.

There is evidence that lipids in cheese are not unhealthy after all. Recent research has shown no connection between the intake of milk fat and the risk of cardiovascular disease, coronary heart disease, or stroke. A meta-analysis of 76 studies concluded that the evidence does not clearly support guidelines that encourage high consumption of polyunsaturated fatty acids and low consumption of total saturated fats.

Participants in a study who ate cheese and other dairy products at least once per day scored significantly higher in several tests of cognitive function compared with those who rarely or never consumed dairy food. These results appear to be due to a combination of factors.

Seemingly, the opposite of what people believe about cheese turns out to be the truth. Studies involving thousands of people over a period of years revealed that a high intake of dairy fat was associated with a lower risk of developing central obesity and a low dairy fat intake was associated with a higher risk of central obesity. Higher consumption of cheese has been associated with higher HDL (“good cholesterol”) and lower LDL (“bad cholesterol”), total cholesterol, and triglycerides.

All-cause mortality showed a reduction associated with dairy food intake in a meta-analysis of five studies in England and Wales covering 509,000 deaths in 2008. The authors concluded that there was a large mismatch between evidence from long-term studies and perceptions of harm from dairy foods.

Yes, some people are allergic to the proteins in cheese, and others are vegans who don’t touch dairy products on principle. Many people can’t digest lactose (milk sugar) very well, but aged cheese contains little of it, and lactose-free cheese has been on the market for years. But cheese is quite healthy for most consumers. Moderation in food consumption is always the key: as long as you eat cheese in reasonable amounts, you ought to have no ill effects while reaping the benefits.

Michael Tunick is a research chemist with the Dairy and Functional Foods Research Unit of the U.S. Department of Agriculture’s Agricultural Research Service. He is the author of The Science of Cheese. You can find out more things you never knew about cheese.

Image credit: Hand milking a cow, by the State Library of Australia. CC-BY-2.0 via Wikimedia Commons.

The post The health benefits of cheese appeared first on OUPblog.

22. How Nazi Germany lost the nuclear plot

By Gordon Fraser


When the Nazis came to power in Germany in 1933, neither the Atomic Bomb nor the Holocaust was on anybody’s agenda. Instead, the Nazis’ top aim was to rid German culture of perceived pollution. A priority was science, where, paradoxically, Germany already led the world. To safeguard this position, loud Nazi voices, such as that of Nobel laureate Philipp Lenard, complained about a ‘massive infiltration of the Jews into universities’.

The first enactments of a new regime are highly symbolic. The cynically-named Law for the Restoration of the Civil Service, published in April 1933, targeted those who had non-Aryan, ‘particularly Jewish’, parents or grandparents. Having a single Jewish grandparent was enough to lose one’s job. Thousands of Jewish university teachers, together with doctors, lawyers, and other professionals were sacked. Some found more modest jobs, some retired, some left the country. Germany was throwing away its hard-won scientific supremacy. When warned of this, Hitler retorted ‘If the dismissal of [Jews] means the end of German science, then we will do without science for a few years’.

Why did Jewish people have such a significant influence on German science? They had a long tradition of religious study, but assimilated Jews had begun to look instead to a radiant new role model. Albert Einstein was the most famous scientist the world had ever known. As well as being an icon for ambitious young students, he was also a prominent political target. Aware of this, he left Germany for the USA in 1932, before the Nazis came to power.

How to win friends and influence nuclear people
The talented nuclear scientist Leo Szilard appeared to be able to foresee the future. He exploited this by carefully cultivating people with influence. In Berlin, he sought out Einstein.

Like Einstein, Szilard anticipated the Civil Service Law. He also saw the need for a scheme to assist the refugee German academics who did not. First in Vienna, then in London, he found influential people who could help.

Just as the Nazis moved into power, nuclear physics was revolutionized by the discovery of a new nuclear component, the neutron. One of the main centres of neutron research was Berlin, where scientists saw a mysterious effect when uranium was irradiated. They asked their former Jewish colleagues, now in exile, for an explanation.

The answer was ‘nuclear fission’. As the Jewish scientists who had fled Germany settled into new jobs, they realized how fission was the key to a new source of energy. It could also be a weapon of unimaginable power, the Atomic Bomb. It was not a great intellectual leap, so the exiled scientists were convinced that their former colleagues in Germany had come to the same conclusion. So, when war looked imminent, they wanted to get to the Atomic Bomb first. One wrote of ‘the fear of the Nazis beating us to it’.

Szilard, by now in the US, saw it was time to act again. He knew that President Roosevelt would not listen to him, but would listen to Einstein, and wrote to Roosevelt over Einstein’s signature.

When a delegation finally managed to see him on 11 October 1939, Roosevelt said “what you’re after is to see that the Nazis don’t blow us up”. But nobody knew exactly what to do. The letter had mentioned bombs ‘too heavy for transportation by air’. Such a vague threat did not appear urgent.

But in 1940, German Jewish exiles in Britain realized that if the small amount of the isotope 235 in natural uranium could be separated, it could produce an explosion equivalent to several thousand tons of dynamite. Only a few kilograms would be needed, and could be carried by air. The logistics of nuclear weapons suddenly changed. Via Einstein, Szilard wrote another Presidential letter. On 19 January 1942, Roosevelt ordered a rapid programme for the development of the Atomic Bomb, the ‘Manhattan Project’.

Across the Atlantic, the Germans indeed had seen the implications of nuclear fission. But its scientific message had been muffled. Key scientists had gone. Germany had no one left with the prescience of Szilard, nor the political clout of Einstein. The Nazis also had another priority. On 20 January, one day after Roosevelt had given the go-ahead for the Atomic Bomb, a top-level meeting in the Berlin suburb of Wannsee outlined a “final solution of the Jewish Problem”. Nazi Germany had its own crash programme.

US crash programme – on 16 July 1945, just over three years after the huge project had been launched, the Atomic Bomb was tested in the New Mexico desert.

Nazi crash programme – what came to be known as the Holocaust rapidly got under way. Here a doomed woman and her children arrive at the specially-built Auschwitz-Birkenau extermination centre.

Thus two huge projects, each unknown to the other, emerged simultaneously on opposite sides of the Atlantic. The dreadful schemes forged ahead, and each in turn became reality. On two counts, what had been unimaginable no longer was.

Gordon Fraser was for many years the in-house editor at CERN, the European Organization for Nuclear Research, in Geneva. His books on popular science and scientists include Cosmic Anger, a biography of Abdus Salam, the first Muslim Nobel scientist, Antimatter: The Ultimate Mirror, and The Quantum Exodus. He is also the editor of The New Physics for the 21st Century and The Particle Century.

Image credits: Atomic Bomb tested in the New Mexico desert. Photograph courtesy of  Los Alamos National Laboratory; Auschwitz-Birkenau, alte Frau und Kinder, Bundesarchiv Bild, Creative Commons License via Wikimedia Commons.

The post How Nazi Germany lost the nuclear plot appeared first on OUPblog.

23. Celebrating Newton, 325 years after Principia

By Robyn Arianrhod


This year, 2012, marks the 325th anniversary of the first publication of the legendary Principia (Mathematical Principles of Natural Philosophy), the 500-page book in which Sir Isaac Newton presented the world with his theory of gravity. It was the first comprehensive scientific theory in history, and it’s withstood the test of time over the past three centuries.

Unfortunately, this superb legacy is often overshadowed, not just by Einstein’s achievement but also by Newton’s own secret obsession with Biblical prophecies and alchemy. Given these preoccupations, it’s reasonable to wonder if he was quite the modern scientific guru his legend suggests, but personally I’m all for celebrating him as one of the greatest geniuses ever. Although his private obsessions were excessive even for the seventeenth century, he was well aware that in eschewing metaphysical, alchemical, and mystical speculation in his Principia, he was creating a new way of thinking about the fundamental principles underlying the natural world. To paraphrase Newton himself, he changed the emphasis from metaphysics and mechanism to experiment and mathematical analogy. His method has proved astonishingly fruitful, but initially it was quite controversial.

He had developed his theory of gravity to explain the cause of the mysterious motion of the planets through the sky: in a nutshell, he derived a formula for the force needed to keep a planet moving in its observed elliptical orbit, and he connected this force with everyday gravity through the experimentally derived mathematics of falling motion. Ironically (in hindsight), some of his greatest peers, like Leibniz and Huygens, dismissed the theory of gravity as “mystical” because it was “too mathematical.” As far as they were concerned, the law of gravity may have been brilliant, but it didn’t explain how an invisible gravitational force could reach all the way from the sun to the earth without any apparent material mechanism. Consequently, they favoured the mainstream Cartesian “theory”, which held that the universe was filled with an invisible substance called ether, whose material nature was completely unknown, but which somehow formed into great swirling whirlpools that physically dragged the planets in their orbits.

The only evidence for this vortex “theory” was the physical fact of planetary motion, but this fact alone could lead to any number of causal hypotheses. By contrast, Newton explained the mystery of planetary motion in terms of a known physical phenomenon, gravity; he didn’t need to postulate the existence of fanciful ethereal whirlpools. As for the question of how gravity itself worked, Newton recognized this was beyond his scope — a challenge for posterity — but he knew that for the task at hand (explaining why the planets move) “it is enough that gravity really exists and acts according to the laws that we have set forth and is sufficient to explain all the motions of the heavenly bodies…”

What’s more, he found a way of testing his theory by using his formula for gravitational force to make quantitative predictions. For instance, he realized that comets were not random, unpredictable phenomena (which the superstitious had feared as fiery warnings from God), but small celestial bodies following well-defined orbits like the planets. His friend Halley famously used the theory of gravity to predict the date of return of the comet now named after him. As it turned out, Halley’s prediction was fairly good, although Clairaut — working half a century later but just before the predicted return of Halley’s comet — used more sophisticated mathematics to apply Newton’s laws to make an even more accurate prediction.
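Halley’s prediction can be sanity-checked in modern terms with Kepler’s third law, which follows directly from Newton’s law of gravity. Here is a rough Python sketch, illustrative only and not Halley’s or Clairaut’s historical calculation; the constants and the comet’s 17.8 AU semi-major axis are standard modern values:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # mass of the sun, kg
AU = 1.496e11      # astronomical unit, m
YEAR = 3.156e7     # seconds in one year

def orbital_period_years(a_au):
    """Kepler's third law, T = 2*pi*sqrt(a^3 / (G*M)),
    for an orbit of semi-major axis a_au (in AU) around the sun."""
    a = a_au * AU
    return 2 * math.pi * math.sqrt(a**3 / (G * M_SUN)) / YEAR

print(round(orbital_period_years(1.0), 2))   # Earth: 1.0 year
print(round(orbital_period_years(17.8), 1))  # Halley's comet: ~75 years
```

The simple two-body formula already lands near the comet’s roughly 75-year period; Clairaut’s sharper prediction required accounting for the perturbations of Jupiter and Saturn, which this sketch ignores.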

Clairaut’s calculations illustrate the fact that despite the phenomenal depth and breadth of Principia, it took a further century of effort by scores of mathematicians and physicists to build on Newton’s work and to create modern “Newtonian” physics in the form we know it today. But Newton had created the blueprint for this science, and its novelty can be seen from the fact that some of his most capable peers missed the point. After all, he had begun the radical process of transforming “natural philosophy” into theoretical physics — a transformation from traditional qualitative philosophical speculation about possible causes of physical phenomena, to a quantitative study of experimentally observed physical effects. (From this experimental study, mathematical propositions are deduced and then made general by induction, as he explained in Principia.)

Even the secular nature of Newton’s work was controversial (and under apparent pressure from critics, he did add a brief mention of God in an appendix to later editions of Principia). Although Leibniz was a brilliant philosopher (and he was also the co-inventor, with Newton, of calculus), one of his stated reasons for believing in the ether rather than the Newtonian vacuum was that God would show his omnipotence by creating something, like the ether, rather than leaving vast amounts of nothing. (At the quantum level, perhaps his conclusion, if not his reasoning, was right.) He also invoked God to reject Newton’s inspired (and correct) argument that gravitational interactions between the various planets themselves would eventually cause noticeable distortions in their orbits around the sun; Leibniz claimed God would have had the foresight to give the planets perfect, unchanging perpetual motion. But he was on much firmer ground when he questioned Newton’s (reluctant) assumption of absolute rather than relative motion, although it would take Einstein to come up with a relativistic theory of gravity.

Einstein’s theory is even more accurate than Newton’s, especially on a cosmic scale, but within its own terms — that is, describing the workings of our solar system (including, nowadays, the motion of our own satellites) — Newton’s law of gravity is accurate to within one part in ten million. As for his method of making scientific theories, it was so profound that it underlies all the theoretical physics that has followed over the past three centuries. It’s amazing: one of the most religious, most mystical men of his age put his personal beliefs aside and created the quintessential blueprint for our modern way of doing science in the most objective, detached way possible. Einstein agreed; he wrote a moving tribute in the London Times in 1919, shortly after astronomers had provided the first experimental confirmation of his theory of general relativity:

“Let no-one suppose, however, that the mighty work of Newton can really be superseded by [relativity] or any other theory. His great and lucid ideas will retain their unique significance for all time as the foundation of our modern conceptual structure in the sphere of [theoretical physics].”

Robyn Arianrhod is an Honorary Research Associate in the School of Mathematical Sciences at Monash University. She is the author of Seduced by Logic: Émilie Du Châtelet, Mary Somerville and the Newtonian Revolution and Einstein’s Heroes. Read her previous blog posts.

Subscribe to the OUPblog via email or RSS.
Subscribe to only science and medicine articles on the OUPblog via email or RSS.

The post Celebrating Newton, 325 years after Principia appeared first on OUPblog.

0 Comments on Celebrating Newton, 325 years after Principia as of 12/26/2012 8:15:00 AM
24. Understanding the history of chemical elements

By Eric Scerri


After years of lagging behind physics and biology in the popularity stakes, the science of chemistry is staging a big comeback, at least in one particular area. Information about the elements and the periodic table has mushroomed in popular culture. Children, movie stars, and countless others upload YouTube videos of themselves reciting and singing their way through lists of all the elements. Artists and advertisers have latched onto the iconic beauty of the periodic table, with its elegant one hundred and eighteen rectangles containing one or two letters to denote each of the elements. T-shirts are constantly devised to spell out some snappy message using just the symbols for elements. If a word cannot quite be spelled out this way, designers just go ahead and invent new element symbols.

Moreover, the academic study of the periodic table has been undergoing a resurgence. In 2012 an International Conference, only the third one on this subject, was held in the historic city of Cuzco in Peru. Recent years have seen many new books and articles on the elements and the periodic table.

Exactly 100 years ago, in 1913, the English physicist Henry Moseley discovered that the identity of each element was best captured by its atomic number, the number of protons in its nucleus. Whereas the older approach had been to arrange the elements in order of increasing atomic weight, the use of Moseley’s atomic number revealed for the first time just how many elements were still missing from the old periodic table. It turned out to be precisely seven. Moseley’s discovery also provided a clear-cut method for identifying these missing elements: the characteristic spectra produced when an element is bombarded with X-rays.

[Image: periodic-table entry for hafnium (Hf)]

But even though scientists knew which elements were missing and how to identify them, there was no shortage of priority disputes, claims, and counter-claims, some of which persist to this day. In 1923 a Hungarian and a Dutchman (the chemist George de Hevesy and the physicist Dirk Coster) working at the Niels Bohr Institute for Theoretical Physics discovered hafnium and named it after Hafnia, the Latin name for Copenhagen, the city where the Institute is located. The real story, however, lies in the priority dispute that erupted between the Copenhagen team and the French chemist Georges Urbain, who claimed to have discovered this element, which he named celtium, as far back as 1911. With all the excesses of overt nationalism, and with wartime sentiments still running high, the British and French press supported the French claim. The French press sneered, “Ça pue le boche” (“It stinks of the Hun”). The British press, in slightly more restrained though no less chauvinistic terms, announced that,

“We adhere to the original word celtium given to it by Urbain as a representative of the great French nation which was loyal to us throughout the war. We do not accept the name which was given it by the Danes who only pocketed the spoils of war.”

The irony was that Denmark had been neutral during the war but was presumably considered guilty by geographical proximity to Germany. Furthermore, the French claim turned out to be spurious, and the men from Copenhagen won the day, gaining the right to name the new element after the city of its discovery.

Why are there so often priority debates in science? Generally speaking, scientists have little to gain financially from their discoveries. What is left to them is their ego and their claim to priority, for which they will fight to the last. Another striking aspect of this story is that women discovered three, or possibly four, of the seven elements left to be found within the old boundaries of the periodic table (when it was still thought that there were just 92 elements). The three who definitely did discover elements were Lise Meitner, Ida Noddack, and Marguerite Perey, from Austria, Germany, and France respectively. This is one of several areas in science where women have excelled, others being observational astronomy, research in radioactivity, and X-ray crystallography, to name just a few.

One hundred years after the race began, these human stories spanning the two world wars continue to fascinate and provide new insight into the history of science.

Eric Scerri is a leading philosopher of science specializing in the history and philosophy of the periodic table. He is also the founder and editor in chief of the international journal Foundations of Chemistry and has been a full-time lecturer at UCLA for the past fourteen years where he regularly teaches classes of 350 chemistry students as well as classes in history and philosophy of science. He is the author of A Tale of Seven Elements, The Periodic Table: A Very Short Introduction, and The Periodic Table: Its Story and Its Significance. Read his previous blog posts.

Image credit: Image by GreatPatton, released under terms of the GNU FDL in July 2003, via Wikimedia Commons.

The post Understanding the history of chemical elements appeared first on OUPblog.

25. Six methods of detection in Sherlock Holmes

By James O’Brien


Between Edgar Allan Poe’s invention of the detective story with The Murders in the Rue Morgue in 1841 and Sir Arthur Conan Doyle’s first Sherlock Holmes story A Study in Scarlet in 1887, chance and coincidence played a large part in crime fiction. Nevertheless, Conan Doyle resolved that his detective would solve his cases using reason. He modeled Holmes on Poe’s Dupin and made Sherlock Holmes a man of science and an innovator of forensic methods. Holmes is so much at the forefront of detection that he has authored several monographs on crime-solving techniques. In most cases the well-read Conan Doyle has Holmes use methods years before the official police forces in both Britain and America get around to them. The result was 60 stories in which logic, deduction, and science dominate the scene.

FINGERPRINTS

Sherlock Holmes was quick to realize the value of fingerprint evidence. The first case in which fingerprints are mentioned is The Sign of Four, published in 1890, and he’s still using them 36 years later in the 55th story, The Three Gables (1926). Scotland Yard did not begin to use fingerprints until 1901.

It is interesting to note that Conan Doyle chose to have Holmes use fingerprints but not bertillonage (also called anthropometry), the system of identification by measuring twelve characteristics of the body. That system was originated by Alphonse Bertillon in Paris. The two methods competed for forensic ascendancy for many years. The astute Conan Doyle picked the eventual winner.

TYPEWRITTEN DOCUMENTS

As the author of a monograph entitled “The Typewriter and its Relation to Crime,” Holmes was of course an innovator in the analysis of typewritten documents. In the one case involving a typewriter, A Case of Identity (1891), only Holmes realized the importance of the fact that all the letters received by Mary Sutherland from Hosmer Angel were typewritten — even his name is typed and no signature is applied. This observation leads Holmes to the culprit. By obtaining a typewritten note from his suspect, Holmes brilliantly analyses the idiosyncrasies of the man’s typewriter. In the United States, the Federal Bureau of Investigation (FBI) started a Document Section soon after its crime lab opened in 1932. Holmes’s work preceded this by forty years.

HANDWRITING

Conan Doyle, a true believer in handwriting analysis, exaggerates Holmes’s abilities to interpret documents. Holmes is able to tell gender, make deductions about the character of the writer, and even compare two samples of writing and deduce whether the persons are related. This is another area where Holmes has written a monograph (on the dating of documents). Handwritten documents figure in nine stories. In The Reigate Squires, Holmes observes that two related people wrote the incriminating note jointly. This allows him to quickly deduce that the Cunninghams, father and son, are the guilty parties. In The Norwood Builder, Holmes can tell that Jonas Oldacre has written his will while riding on a train. Reasoning that no one would write such an important document on a train, Holmes is persuaded that the will is fraudulent. So immediately at the beginning of the case he is hot on the trail of the culprit.

FOOTPRINTS

Holmes also uses footprint analysis to identify culprits throughout his fictional career, from the very first story to the 57th (The Lion’s Mane, published in 1926). Fully 29 of the 60 stories include footprint evidence. The Boscombe Valley Mystery is solved almost entirely by footprint analysis. Holmes analyses footprints on quite a variety of surfaces: clay soil, snow, carpet, dust, mud, blood, ashes, and even a curtain. Yet another of Sherlock Holmes’s monographs is on the topic (“The tracing of footsteps, with some remarks upon the uses of Plaster of Paris as a preserver of impresses”).

[Image: the dancing men cipher, from “The Adventure of the Dancing Men”]

CIPHERS

Sherlock Holmes solves a variety of ciphers. In The “Gloria Scott” he deduces that, in the message that frightens Old Trevor, every third word is to be read. A similar system was used in the American Civil War; it was also how young listeners of the Captain Midnight radio show in the 1940s used their decoder rings to get information about upcoming programs. In The Valley of Fear Holmes has a man planted inside Professor Moriarty’s organization. When he receives an encoded message, Holmes must first realize that the cipher is based on a book; after deducing which book, he is able to retrieve the message. This is exactly how Benedict Arnold sent the British information about General George Washington’s troop movements. Holmes’s most successful use of cryptology occurs in The Dancing Men. His analysis of the stick-figure men left as messages is done by frequency analysis, starting with “e” as the most common letter in English. Conan Doyle is again following Poe, who had earlier used the same idea in The Gold Bug (1843). Holmes’s monograph on cryptology analyses 160 separate ciphers.
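The frequency analysis Holmes applies to the dancing men is easy to sketch in a few lines of Python. This is a toy illustration only: it uses a Caesar shift as a stand-in for the story’s stick-figure substitution alphabet, and the sample sentence and shift value are invented for the example.

```python
from collections import Counter
import string

def frequency_table(ciphertext):
    """Count letter frequencies in the ciphertext, most common first.
    This is the first step of a frequency-analysis attack."""
    letters = [c for c in ciphertext.lower() if c in string.ascii_lowercase]
    return Counter(letters).most_common()

# Toy ciphertext: an English sentence under a Caesar shift of 3.
# A simple substitution cipher (like the dancing men) preserves
# letter frequencies, which is exactly what the attack exploits.
plaintext = ("never trust to general impressions my boy "
             "but concentrate yourself upon details")
shift = 3
ciphertext = "".join(
    chr((ord(c) - ord("a") + shift) % 26 + ord("a")) if c.isalpha() else c
    for c in plaintext
)

table = frequency_table(ciphertext)
# 'e' is the most common letter in English, so the most frequent
# cipher letter is a good first guess for 'e'; that guess recovers
# the shift here, since 'e' + 3 = 'h'.
guess_shift = (ord(table[0][0]) - ord("e")) % 26
print(table[0], guess_shift)  # ('h', 9) 3
```

For a full substitution alphabet one would continue down the frequency table (guessing “t”, “a”, “o”, and so on) and refine the guesses against common words, which is essentially how Holmes proceeds in the story.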

DOGS

Conan Doyle provides us with an interesting array of dog stories and analyses. The most famous line in all sixty stories, spoken by Inspector Gregory in Silver Blaze, is “The dog did nothing in the night-time.” When Holmes directs Gregory’s attention to “the curious incident of the dog in the night-time,” Gregory is puzzled by this enigmatic clue. Only Holmes seems to realize that the dog should have done something. Why did the dog make no noise when the horse, Silver Blaze, was led out of the stable in the dead of night? Inspector Gregory may be slow to catch on, but Sherlock Holmes is immediately suspicious of the horse’s trainer, John Straker. In Shoscombe Old Place we find exactly the opposite behavior: Lady Beatrice Falder’s dog snarled when he should not have, and this time the dog doing something was the key to the solution. When Holmes took the dog near his mistress’s carriage, the dog knew that someone was impersonating his mistress. In two other cases Holmes employs dogs to follow the movements of people. In The Sign of Four, the dog Toby initially fails to follow the odor of creosote to find Tonga, the pygmy from the Andaman Islands. In The Missing Three-Quarter the dog Pompey successfully tracks Godfrey Staunton by the smell of aniseed. And of course, Holmes mentions yet another monograph on the use of dogs in detective work.

James O’Brien is the author of The Scientific Sherlock Holmes. He will be signing books at the OUP booth 524 at the American Chemical Society conference in Indiana on 9 September 2013 at 2:00 p.m. He is Distinguished Professor Emeritus at Missouri State University. A lifelong fan of Holmes, O’Brien presented his paper “What Kind of Chemist Was Sherlock Holmes” at the 1992 national American Chemical Society meeting, which resulted in an invitation to write a chapter on Holmes the chemist in the book Chemistry and Science Fiction. He has since given over 120 lectures on Holmes and science. Read his previous blog post “Sherlock Holmes knew chemistry.”

Image credit: (1) From “The Adventure of the Dancing Men” Sherlock Holmes story. Public domain via Wikimedia Commons. (2) Sherlock Holmes in “The Adventure of the Missing Three-Quarter.” Illustration by Sidney Paget. Strand Magazine, 1904. Public domain via Wikimedia Commons.

The post Six methods of detection in Sherlock Holmes appeared first on OUPblog.

0 Comments on Six methods of detection in Sherlock Holmes as of 9/9/2013 7:31:00 AM
