A large variety of complex systems in ecology, climate science, biomedicine, and engineering have been observed to exhibit so-called tipping points, where the dynamical state of the system abruptly changes. Typical examples are the rapid transition of lakes from clear to turbid conditions or the sudden extinction of species after a slight change in environmental conditions. Data and models suggest that detectable warning signs may precede some, though clearly not all, of these drastic events. This view is also corroborated by recently developed abstract mathematical theory for systems in which processes evolve at different rates and are subject to internal and/or external stochastic perturbations.
One main idea to derive warning signs is to monitor the fluctuations of the dynamical process by calculating the variance of a suitable monitoring variable. When the tipping point is approached via a slowly-drifting parameter, the stabilizing effects of the system slowly diminish and the noisy fluctuations increase via certain well-defined scaling laws.
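To make the idea concrete, here is a minimal sketch (not the authors' actual pipeline) of how one might track these two warning signs. The AR(1) toy model, the window length, and all numbers are illustrative assumptions only.

```python
import numpy as np

def rolling_warning_signs(x, window=200):
    """Rolling variance and lag-1 autocorrelation of a 1-D series.

    Rising trends in either statistic are the classical early-warning
    signs discussed above; the window size is an arbitrary choice.
    """
    variances, autocorrs = [], []
    for start in range(len(x) - window + 1):
        w = x[start:start + window]
        variances.append(np.var(w))
        # lag-1 autocorrelation: correlate the window with itself shifted by one step
        autocorrs.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(variances), np.array(autocorrs)

# Toy example: an AR(1) process whose restoring force slowly weakens,
# mimicking the slow drift of a parameter toward a tipping point.
rng = np.random.default_rng(0)
n = 2000
phi = np.linspace(0.2, 0.99, n)   # autocorrelation parameter drifts toward 1
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()

var, ac = rolling_warning_signs(x)
print("early variance:", var[:3].mean(), " late variance:", var[-3:].mean())
print("early lag-1 autocorrelation:", ac[:3].mean(), " late:", ac[-3:].mean())
```

Both statistics grow as the toy system approaches its transition, which is exactly the signature described above.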
Based upon these observations, it is natural to ask whether these scaling laws are also present in human social networks and could allow us to make predictions about future events. This is an exciting open problem, to which at present only highly speculative answers can be given. It is indeed very difficult to predict a priori unknown events in a social system. Therefore, as an initial step, we reduce the problem to a much simpler one: understanding whether the same mechanisms that have been observed in the natural sciences and engineering could also be present in sociological domains.
In our work, we provide a very first step towards tackling a substantially simpler question by focusing on a priori known events. We analyse a social media data set with a focus on classical variance and autocorrelation scaling-law warning signs. In particular, we consider a few events that are known to occur at a specific time of the year, e.g., Christmas, Halloween, and Thanksgiving. We then consider time series of the frequency of Twitter hashtags related to these events in the weeks before the actual event, but excluding the event date itself and some time period before it.
Now suppose we do not know that a dramatic spike in the number of Twitter hashtags, such as #xmas or #thanksgiving, will occur on the actual event date. Are there signs of the same stochastic scaling laws observed in other dynamical systems visible some time before the event? The more fundamental question is: Are there similarities to known warning signs from other areas also present in social media data?
We answer this question affirmatively: we find that the a priori known events mentioned above are preceded by variance and autocorrelation growth (see Figure). Nevertheless, we are still very far from actually using social networks to predict the occurrence of many other drastic events. For example, it can also be shown that many spikes in Twitter activity are not predictable through variance and autocorrelation growth. Hence, a lot more research is needed to distinguish the different dynamical processes that lead to large outbursts of activity on social media.
The findings suggest that further investigations of dynamical processes in social media would be worthwhile. Currently, a main focus in research on social networks lies on structural questions, such as: Who connects to whom? How many connections do we have on average? Who are the hubs in social media? However, if one takes into account dynamical processes on the network, as well as the changing dynamics of the network topology, one may obtain a much clearer picture of how social systems compare and relate to classical problems in physics, chemistry, biology, and engineering.
One of the high points of the International Congress of Mathematicians, currently underway in Seoul, Korea, is the announcement of the Fields Medal prize winners. The prize is awarded every four years to up to four mathematicians under the age of 40, and is viewed as one of the highest honours a mathematician can receive.
This year sees the first ever female recipient of the Fields Medal, Maryam Mirzakhani, recognised for her highly original contributions to geometry and dynamical systems. Her work bridges several mathematical disciplines – hyperbolic geometry, complex analysis, topology, and dynamics – and influences them in return.
We’re absolutely delighted for Professor Mirzakhani, who serves on the editorial board for International Mathematics Research Notices. To celebrate the achievements of all of the winners, we’ve put together a reading list of free materials relating to their work and to fellow speakers at the International Congress of Mathematicians.
Noted by the International Mathematical Union as work contributing to Mirzakhani’s achievement, this paper investigates the dynamics of the earthquake flow defined by Thurston on the bundle PMg of geodesic measured laminations.
Manjul Bhargava joins Maryam Mirzakhani amongst this year’s winners of the Fields Medal. Here he uses Serre’s mass formula for totally ramified extensions to derive a mass formula that counts all étale algebra extensions of a local field F having a given degree n.
Several authors, some of whom are speaking at the International Congress of Mathematicians, have considered whether the ultrapower and the relative commutant of a C*-algebra or II₁ factor depend on the choice of the ultrafilter.
Wooley’s paper, as well as his talk at the congress, investigates sums of mixed powers involving two squares, two cubes, and various higher powers, concentrating on situations inaccessible to the Hardy-Littlewood method.
When we use a computer, its performance seems to degrade progressively. This is not a mere impression. An old version of Firefox, the free Web browser, was infamous for its “memory leaks”: it would consume increasing amounts of memory to the detriment of other programs. Bugs in the software actually do slow down the system. We all know what the solution is: reboot. We restart the computer, the memory is reset, and the performance is restored, until the bugs slow it down again.
Philosophy is a bit like a computer with a memory leak. It starts well, dealing with significant and serious issues that matter to anyone. Yet, in time, its very success slows it down. Philosophy begins to care more about philosophers’ questions than philosophical ones, consuming increasing amounts of intellectual attention. Scholasticism is the ultimate freezing of the system, the equivalent of Windows’ “blue screen of death”; so many resources are devoted to internal issues that no external input can be processed anymore, and the system stops. The world may be undergoing a revolution, but the philosophical discourse remains detached and utterly oblivious. Time to reboot the system.
Philosophical “rebooting” moments are rare. They are usually prompted by major transformations in the surrounding reality. Since the nineties, I have been arguing that we are witnessing one of those moments. It now seems obvious, even to the most conservative person, that we are experiencing a turning point in our history. The information revolution is profoundly changing every aspect of our lives, quickly and relentlessly. The list is known but worth recalling: education and entertainment, communication and commerce, love and hate, politics and conflicts, culture and health, … feel free to add your preferred topics; they are all transformed by technologies that have the recording and processing of information as their core functions. Meanwhile, philosophy is degrading into self-referential discussions on irrelevancies.
The result of a philosophical rebooting today can only be beneficial. Digital technologies are not just tools merely modifying how we deal with the world, like the wheel or the engine. They are above all formatting systems, which increasingly affect how we understand the world, how we relate to it, how we see ourselves, and how we interact with each other.
The ‘Fourth Revolution’ betrays what I believe to be one of the topics that deserves our full intellectual attention today. The idea is quite simple. Three scientific revolutions have had great impact on how we see ourselves. In changing our understanding of the external world they also modified our self-understanding. After the Copernican revolution, the heliocentric cosmology displaced the Earth and hence humanity from the centre of the universe. The Darwinian revolution showed that all species of life have evolved over time from common ancestors through natural selection, thus displacing humanity from the centre of the biological kingdom. And following Freud, we acknowledge nowadays that the mind is also unconscious. So we are not immobile, at the centre of the universe, we are not unnaturally separate and diverse from the rest of the animal kingdom, and we are very far from being minds entirely transparent to ourselves. One may easily question the value of this classic picture. After all, Freud was the first to interpret these three revolutions as part of a single process of reassessment of human nature and his perspective was blatantly self-serving. But replace Freud with cognitive science or neuroscience, and we can still find the framework useful to explain our strong impression that something very significant and profound has recently happened to our self-understanding.
Since the fifties, computer science and digital technologies have been changing our conception of who we are. In many respects, we are discovering that we are not standalone entities, but rather interconnected informational agents, sharing with other biological agents and engineered artefacts a global environment ultimately made of information, the infosphere. If we need a champion for the fourth revolution this should definitely be Alan Turing.
The fourth revolution offers a historical opportunity to rethink our exceptionalism in at least two ways. Our intelligent behaviour is confronted by the smart behaviour of engineered artefacts, which can be adaptively more successful in the infosphere. Our free behaviour is confronted by the predictability and manipulability of our choices, and by the development of artificial autonomy. Digital technologies sometimes seem to know more about our wishes than we do. We need philosophy to make sense of the radical changes brought about by the information revolution. And we need it to be at its best, for the difficulties we are facing are challenging. Clearly, we need to reboot philosophy now.
This year, 2012, marks the 325th anniversary of the first publication of the legendary Principia (Mathematical Principles of Natural Philosophy), the 500-page book in which Sir Isaac Newton presented the world with his theory of gravity. It was the first comprehensive scientific theory in history, and it’s withstood the test of time over the past three centuries.
Unfortunately, this superb legacy is often overshadowed, not just by Einstein’s achievement but also by Newton’s own secret obsession with Biblical prophecies and alchemy. Given these preoccupations, it’s reasonable to wonder if he was quite the modern scientific guru his legend suggests, but personally I’m all for celebrating him as one of the greatest geniuses ever. Although his private obsessions were excessive even for the seventeenth century, he was well aware that in eschewing metaphysical, alchemical, and mystical speculation in his Principia, he was creating a new way of thinking about the fundamental principles underlying the natural world. To paraphrase Newton himself, he changed the emphasis from metaphysics and mechanism to experiment and mathematical analogy. His method has proved astonishingly fruitful, but initially it was quite controversial.
He had developed his theory of gravity to explain the cause of the mysterious motion of the planets through the sky: in a nutshell, he derived a formula for the force needed to keep a planet moving in its observed elliptical orbit, and he connected this force with everyday gravity through the experimentally derived mathematics of falling motion. Ironically (in hindsight), some of his greatest peers, like Leibniz and Huygens, dismissed the theory of gravity as “mystical” because it was “too mathematical.” As far as they were concerned, the law of gravity may have been brilliant, but it didn’t explain how an invisible gravitational force could reach all the way from the sun to the earth without any apparent material mechanism. Consequently, they favoured the mainstream Cartesian “theory”, which held that the universe was filled with an invisible substance called ether, whose material nature was completely unknown, but which somehow formed into great swirling whirlpools that physically dragged the planets in their orbits.
The only evidence for this vortex “theory” was the physical fact of planetary motion, but this fact alone could lead to any number of causal hypotheses. By contrast, Newton explained the mystery of planetary motion in terms of a known physical phenomenon, gravity; he didn’t need to postulate the existence of fanciful ethereal whirlpools. As for the question of how gravity itself worked, Newton recognized this was beyond his scope — a challenge for posterity — but he knew that for the task at hand (explaining why the planets move) “it is enough that gravity really exists and acts according to the laws that we have set forth and is sufficient to explain all the motions of the heavenly bodies…”
What’s more, he found a way of testing his theory by using his formula for gravitational force to make quantitative predictions. For instance, he realized that comets were not random, unpredictable phenomena (which the superstitious had feared as fiery warnings from God), but small celestial bodies following well-defined orbits like the planets. His friend Halley famously used the theory of gravity to predict the date of return of the comet now named after him. As it turned out, Halley’s prediction was fairly good, although Clairaut — working half a century later but just before the predicted return of Halley’s comet — used more sophisticated mathematics to apply Newton’s laws to make an even more accurate prediction.
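As a rough illustration of the kind of quantitative prediction the theory permits, here is a short sketch that uses Newton's inverse-square law to recover the Earth's orbital period, under the simplifying assumption of a circular orbit. The constants are standard modern values, not figures from Principia.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # mass of the Sun, kg
M_EARTH = 5.972e24     # mass of the Earth, kg
R_ORBIT = 1.496e11     # mean Earth-Sun distance, m

# Newton's law of universal gravitation: F = G * m1 * m2 / r^2
force = G * M_SUN * M_EARTH / R_ORBIT**2

# For a (nearly) circular orbit this force supplies the centripetal
# force m * v^2 / r, which fixes the orbital speed and hence the period.
speed = math.sqrt(G * M_SUN / R_ORBIT)
period_days = 2 * math.pi * R_ORBIT / speed / 86400

print(f"gravitational force on Earth: {force:.3e} N")
print(f"implied orbital period: {period_days:.1f} days")  # about 365 days
```

A two-line calculation from the law of gravity lands within a day of the observed year, which is the sense in which the theory made testable, quantitative predictions.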
Clairaut’s calculations illustrate the fact that despite the phenomenal depth and breadth of Principia, it took a further century of effort by scores of mathematicians and physicists to build on Newton’s work and to create modern “Newtonian” physics in the form we know it today. But Newton had created the blueprint for this science, and its novelty can be seen from the fact that some of his most capable peers missed the point. After all, he had begun the radical process of transforming “natural philosophy” into theoretical physics — a transformation from traditional qualitative philosophical speculation about possible causes of physical phenomena, to a quantitative study of experimentally observed physical effects. (From this experimental study, mathematical propositions are deduced and then made general by induction, as he explained in Principia.)
Even the secular nature of Newton’s work was controversial (and under apparent pressure from critics, he did add a brief mention of God in an appendix to later editions of Principia). Although Leibniz was a brilliant philosopher (and he was also the co-inventor, with Newton, of calculus), one of his stated reasons for believing in the ether rather than the Newtonian vacuum was that God would show his omnipotence by creating something, like the ether, rather than leaving vast amounts of nothing. (At the quantum level, perhaps his conclusion, if not his reasoning, was right.) He also invoked God to reject Newton’s inspired (and correct) argument that gravitational interactions between the various planets themselves would eventually cause noticeable distortions in their orbits around the sun; Leibniz claimed God would have had the foresight to give the planets perfect, unchanging perpetual motion. But he was on much firmer ground when he questioned Newton’s (reluctant) assumption of absolute rather than relative motion, although it would take Einstein to come up with a relativistic theory of gravity.
Einstein’s theory is even more accurate than Newton’s, especially on a cosmic scale, but within its own terms — that is, describing the workings of our solar system (including, nowadays, the motion of our own satellites) — Newton’s law of gravity is accurate to within one part in ten million. As for his method of making scientific theories, it was so profound that it underlies all the theoretical physics that has followed over the past three centuries. It’s amazing: one of the most religious, most mystical men of his age put his personal beliefs aside and created the quintessential blueprint for our modern way of doing science in the most objective, detached way possible. Einstein agreed; he wrote a moving tribute in the London Times in 1919, shortly after astronomers had provided the first experimental confirmation of his theory of general relativity:
“Let no-one suppose, however, that the mighty work of Newton can really be superseded by [relativity] or any other theory. His great and lucid ideas will retain their unique significance for all time as the foundation of our modern conceptual structure in the sphere of [theoretical physics].”
This year marked the centenary of the birth of Alan Mathison Turing; among the many, many commemorative events that occurred during the Alan Turing Year were the reissues of two biographies of AMT. One was Andrew Hodges's extraordinary work Alan Turing: The Enigma. The other was Sara Turing's long-unavailable book about her son, simply titled [...]
Two contrasting experiences stick in mind from my first year at university.
First, I spent a lot of time in lectures that I did not understand. I don’t mean lectures in which I got the general gist but didn’t quite follow the technical details. I mean lectures in which I understood not one thing from the beginning to the end. I still went to all the lectures and wrote everything down – I was a dutiful sort of student – but this was hardly the ideal learning experience.
Second, at the end of the year, I was awarded first class marks. The best thing about this was that later that evening, a friend came up to me in the bar and said, “Hey Lara, I hear you got a first!” and I was rapidly surrounded by other friends offering enthusiastic congratulations. This was a revelation. I had attended the kind of school at which students who did well were derided rather than congratulated. I was delighted to find myself in a place where success was celebrated.
Looking back, I think that the interesting thing is the relationship between these two experiences. How could I have done so well when I understood so little of so many lectures?
I don’t think that there was a problem with me. I didn’t come out at the very top, but obviously I had the ability and dedication to get to grips with the mathematics. Nor do I think that there was a problem with the lecturers. Like the vast majority of the mathematicians I have met since, my lecturers cared about their courses and put considerable effort into giving a logically coherent presentation. Not all were natural entertainers, but there was nothing fundamentally wrong with their teaching.
I now think that the problems were more subtle, and related to two issues in particular.
First, there was a communication gap: the lecturers and I did not understand mathematics in the same way. Mathematicians understand mathematics as a network of axioms, definitions, examples, algorithms, theorems, proofs, and applications. They present and explain these, hoping that students will appreciate the logic of the ideas and will think about the ways in which they can be combined. I didn’t really know how to learn effectively from lectures on abstract material, and research indicates that I was pretty typical in this respect.
Students arrive at university with a set of expectations about what it means to ‘do mathematics’ – about what kind of information teachers will provide and about what students are supposed to do with it. Some of these expectations work well at school but not at university. Many students need to learn, for instance, to treat definitions as stipulative rather than descriptive, to generate and check their own examples, to interpret logical language in a strict, mathematical way rather than a more flexible, context-influenced way, and to infer logical relationships within and across mathematical proofs. These things are expected, but often they are not explicitly taught.
My second problem was that I didn’t have very good study skills. I wasn’t terrible – I wasn’t lazy, or arrogant, or easily distracted, or unwilling to put in the hours. But I wasn’t very effective in deciding how to spend my study time. In fact, I don’t remember making many conscious decisions about it at all. I would try a question, find it difficult, stare out of the window, become worried, attempt to study some section of my lecture notes instead, fail at that too, and end up discouraged. Again, many students are like this. I have met a few who probably should have postponed university until they were ready to exercise some self-discipline, but most do want to learn.
What they lack is a set of strategies for managing their learning – for deciding how to distribute their time when no-one is checking what they’ve done from one class to the next, and for maintaining momentum when things get difficult. Many could improve their effectiveness by doing simple things like systematically prioritizing study tasks, and developing a routine in which they study particular subjects in particular gaps between lectures. Again, the responsibility for learning these skills lies primarily with the student.
Personally, I never got to a point where I understood every lecture. But I learned how to make sense of abstract material, I developed strategies for studying effectively, and I maintained my first class marks. What I would now say to current students is this: take charge. Find out what lecturers and tutors are expecting, and take opportunities to learn about good study habits. Students who do that should find, like I did, that undergraduate mathematics is challenging, but a pleasure to learn.
Ducklings in a Row by Renee Heiss, illustrated by Matthew B. Holcomb. Character Publishing. 4 Stars. Back Cover: When Mama Duck asks her ducklings to arrange themselves from One to Ten, the baby ducks learn much more than sequencing skills. In Ducklings in a Row, ten unique duckling personalities combine to form a humorous …
Nowadays it appears impossible to open a newspaper or switch on the television without hearing about “big data”. Big data, it sometimes seems, will provide answers to all the world’s problems. Management consulting company McKinsey, for example, promises “a tremendous wave of innovation, productivity, and growth … all driven by big data”.
An alien observer visiting the Earth might think it represents a major scientific breakthrough. Google Trends shows references to the phrase bobbing along at about one per week until 2011, at which point there began a dramatic, steep, and almost linear increase in references to the phrase. It’s as if no one had thought of it until 2011. Which is odd because data mining, the technology of extracting valuable, useful, or interesting information from large data sets, has been around for some 20 years. And statistics, which lies at the heart of all of this, has been around as a formal discipline for a century or more.
Or perhaps it’s not so odd. If you look back to the beginning of data mining, you find a very similar media enthusiasm for the advances it was going to bring, the breakthroughs in understanding, the sudden discoveries, the deep insights. In fact it almost looks as if we have been here before. All of this leads one to suspect that there’s less to the big data enthusiasm than meets the eye: that it’s not so much a sudden change in our technical abilities as a sudden media recognition of what data scientists, and especially statisticians, are capable of.
Of course, I’m not saying that the increasing size of data sets does not lead to promising new opportunities – though I would question whether it’s the “large” that really matters as much as the novelty of the data sets. The tremendous economic impact of GPS data (estimated to be $150-270bn per year), retail transaction data, or genomic and bioinformatics data arises not from the size of these data sets, but from the fact that they provide new kinds of information. And while it’s true that a massive mountain of data needed to be explored to detect the Higgs boson, the core aspect was the nature of the data rather than its amount.
Moreover, if I’m honest, I also have to admit that it’s not solely statistics which leads to the extraction of value from these massive data sets. Often it’s a combination of statistical inferential methods (e.g. determining an accurate geographical location from satellite signals) along with data manipulation algorithms for search, matching, sorting, and so on. How these two aspects are balanced depends on the particular application. Locating a shop which stocks that out-of-print book is less of an inferential statistical problem and more of a search issue. Determining the riskiness of a company seeking a loan owes little to search but much to statistics.
Diagram of Total Information Awareness system designed by the Information Awareness Office
Some time after the phrase “data mining” hit the media, it suffered a backlash. Predictably enough, much of this was based around privacy concerns. A paradigmatic illustration was the Total Information Awareness project in the United States. Its basic aim was to search for suspicious behaviour patterns within vast amounts of personal data, to identify individuals likely to commit crimes, especially terrorist offences. It included data on web browsing, credit card transactions, driving licences, court records, passport details, and so on. After concerns were raised, it was suspended in 2003 (though it is claimed that the software continued to be used by various agencies). As will be evident from recent events, concerns about the security agencies’ monitoring of the public continue.
Technology is amoral — neither intrinsically moral nor immoral. Morality lies in the hands of those who wield it. This is as true of big data technology as it is of nuclear technology and biotechnology. It is abundantly clear — if only from the examples we have already seen — that massive data sets do hold substantial promise for enhancing the well-being of mankind, but we must be aware of the risks. A suitable balance must be struck.
It’s also important to note that the mere existence of huge data files is of itself of no benefit to anyone. For these data sets to be beneficial, it’s necessary to be able to use the data to build models, to estimate effect sizes, to determine if an observed effect should be regarded as mere chance variation, to be sure it’s not a data quality issue, and so on. That is, statistical skills are critical to making use of the big data resources. In just the same way that vast underground oil reserves were useless without the technology to turn them into motive power, so the vast collections of data are useless without the technology to analyse them. Or, as I sometimes put it, people don’t want data, what they want are answers. And statistics provides the tools for finding those answers.
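As a toy illustration of that last point, here is a sketch of one standard way a statistician might ask whether an observed effect is mere chance variation: a permutation test. The scenario and all numbers are invented for illustration, not drawn from any real data set.

```python
import numpy as np

def permutation_test(a, b, n_permutations=10_000, seed=0):
    """Estimate how often a difference in means as large as the observed
    one would arise by chance alone, by shuffling the group labels."""
    rng = np.random.default_rng(seed)
    observed = np.mean(a) - np.mean(b)
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = pooled[:len(a)].mean() - pooled[len(a):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return observed, count / n_permutations

# Hypothetical example: a metric measured before and after some change.
rng = np.random.default_rng(1)
before = rng.normal(0.10, 0.02, 500)
after = rng.normal(0.11, 0.02, 500)
diff, p_value = permutation_test(before, after)
print(f"observed difference: {diff:.4f}, p-value: {p_value:.4f}")
```

A small p-value says the observed effect would rarely arise by shuffling alone, which is precisely the "is it mere chance variation?" question posed above.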
Math is on my mind lately as I wrap up the Parallelogram series. (Yes, Dear Readers, Book 4 is coming! There are just so many words.) I, like my main character Audie in the series, enjoy quantum physics but do not enjoy the math. Or, to put it less charitably, cannot do the math.
But I can’t help wondering if I would have had a completely different attitude toward math in school if I’d had a teacher like this. Or at least seen a demonstration like this. Because there’s no doubt Arthur Benjamin makes math FUN. (Although no matter how fun it is, I still think there’s no way mere mortals could do what he does.)
Baseball fans love to compare the players of today to the players who came before, but one must wonder how great the margin of error in these comparisons is. Is there any way of knowing who the real baseball greats are, and whose legend should stand the test of time?
Let’s take Omar Vizquel as an example. So says Wikipedia, “Vizquel is considered one of baseball’s all-time best fielding shortstops.” It’s true, Vizquel “is considered” a great fielder. Of shortstops, he
- holds the highest career fielding percentage of those with a long career;
- has participated in more double plays than any other shortstop (and his primary double-play partner just entered the Hall of Fame);
- is third in career assists; and
- has played more games at shortstop than anyone in major league history.
On top of all that, Vizquel has received more Gold Gloves than any other shortstop except for Ozzie “Wizard of Oz” Smith. Indeed, writers have described Omar and Ozzie as the “graceful Fred Astaire” and “acrobatic Gene Kelly,” respectively, of shortstops.
Vizquel has something of a signature play—fielding ordinary grounders (not just bunts) with his bare hand and throwing in one motion. He was the starting shortstop for the most successful American League team of the 1990s, second only to the Yankees. He hasn’t been much of a hitter, even for a shortstop, so it’s not unreasonable to infer he must have been a great fielder to hang on as long as he has.
But, after all that, how do we really know Vizquel actually is one of baseball’s all-time best fielding shortstops? With metrics.
Let’s start with the question: What is the job of a fielder? To help his team prevent runs. At shortstop, this mainly involves converting ground balls into outs and getting the second out on double plays—in other words, recording assists. (It is very rare that shortstops catch fly balls or pop ups that couldn’t be fielded by at least two and as many as five other fielders. Most of the differences in putout rates for shortstops reflect how much they ‘hog’ these easy chances, not how many marginal hits they help their teams prevent. And line drive putouts at short are mostly dumb-luck plays.)
It is not the job of a shortstop (or any fielder) to look “graceful” or make trick plays. It’s not even a fielder’s job to avoid errors. In fact, a fielder who makes ten more successful plays but also ten more errors has just the same value as the fielder who makes an average number of plays and errors, because an error is no worse than a play not made.
Any fielding metric for shortstop needs to estimate how many assists a shortstop generated above or below what an average shortstop would have, playing for the same team. My system uses some arithmetic and the statistical technique of “regression analysis,” resulting in what I call Defensive Regression Analysis, or DRA.
DRA estimates the number of assists the league average shortstop would have recorded in place of the shortstop you’re rating by starting with the average number of shortstop assists per team that year and adjusting that number up or down based on statistically significant relationships between shortstop assists and other defensive statistics of the player’s team that are
1. not influenced by the shortstop himself,
2. as little influenced by the fielding quality of his teammates as possible, and
3. independent (approximately) of each other.
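The details of DRA itself are beyond the scope of this post, but the following sketch shows the general shape of the idea: regress team-level shortstop assists on contextual predictors, then rate a player by his assists above or below that expectation. The predictors and all numbers here are hypothetical stand-ins, not the actual DRA inputs.

```python
import numpy as np

# Hypothetical team-season data: each row is one team-season.
# The columns are predictors of shortstop assist opportunities that are
# (mostly) outside the shortstop's own control, e.g. the team's
# ground-ball tendency and innings thrown by its left-handed pitchers.
X = np.array([
    [0.45, 420.0],
    [0.50, 510.0],
    [0.48, 460.0],
    [0.52, 530.0],
    [0.44, 400.0],
    [0.49, 480.0],
])
assists = np.array([430, 470, 455, 490, 420, 465])  # team shortstop assists

# Ordinary least squares with an intercept: expected assists given context.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, assists, rcond=None)

def expected_assists(gb_rate, lhp_innings):
    return coef[0] + coef[1] * gb_rate + coef[2] * lhp_innings

# A shortstop's rating is then his actual assists minus the expectation.
actual = 480
context = expected_assists(0.49, 480.0)
print(f"assists above expectation: {actual - context:+.1f}")
```

The whole point of the three conditions in the list above is to keep the predictors from smuggling the shortstop's own fielding (or his teammates') into the baseline he is measured against.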
A recent satirical essay in the Huffington Post reports that congressional Republicans are trying to legislate the value of pi. Fearing that the complexity of modern geometry is hurting America’s performance on international measures of mathematical knowledge, they have decreed that from now on pi shall be equal to three. It is a sad commentary on American culture that you must read slowly and carefully to be certain the essay is just satire.
It has been wisely observed that reality is that which, when you stop believing in it, doesn’t go away. Scientists are especially aware of this, since it is sometimes their sad duty to inform people of truths they would prefer not to accept. Evolution cannot be made to go away by folding your arms and shaking your head, and the planet is warming precipitously regardless of what certain business interests claim to believe. Likewise, the value of pi is what it is, no matter what a legislative body might think.
That value, of course, is found by dividing the circumference of a circle by its diameter. Except that if you take an actual circular object and apply your measuring devices to it you will obtain only a crude approximation to pi. The actual value is an irrational number, meaning that it is a decimal that goes on forever without repeating itself. One of my middle school math teachers once told me that it is just crazy for a number to behave in such a fashion, and that is why it is said to be irrational. Since I rather liked that explanation, you can imagine my disappointment at learning it was not correct.
In this context, the word “irrational” really just means “not a ratio.” More specifically, it is not a ratio of two integers. You see, if you divide one integer by another there are only two things that can happen. Either the process ends or it goes on forever by repeating a pattern. For example, if you divide one by four you get .25, while if you divide one by three you get .3333… . That these are the only possibilities can be proved with some elementary number theory, but I shall spare you the details of how that is done. That aside, our conclusion is that since pi never ends and never repeats, it cannot be written as one integer divided by another.
Which might make you wonder how anyone evaluated pi in the first place. If the number is defined geometrically, but we cannot hope to measure real circles with sufficient accuracy, then why do we constantly hear about computers evaluating its first umpteen million digits? The answer is that we are not forced to define pi in terms of circles. The number arises in other contexts, notably trigonometry. By coupling certain facts about right triangles with techniques drawn from calculus, you can express pi as the sum of a certain infinite series. That is, you can find a never-ending list of numbers that gets smaller and smaller and smaller, with the property that the more of the numbers you sum the better your approximation to pi. Very cool stuff.
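The essay doesn't say which series it has in mind, but the textbook example of this construction is the Leibniz series, which comes from applying calculus to the arctangent: π/4 = 1 − 1/3 + 1/5 − 1/7 + …. A few lines of Python show its partial sums creeping toward pi, however slowly:

```python
# Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
# Summing more terms gives a better approximation, though convergence is slow.
def pi_partial_sum(n_terms):
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

for n in (10, 1000, 100000):
    print(f"{n:>7} terms: {pi_partial_sum(n):.6f}")
# 3.041840, 3.140593, 3.141583 ... creeping toward 3.141593
```

In practice much faster-converging series are used for record-setting digit computations, but the principle is the same: a never-ending sum whose partial totals home in on pi.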
Of course, I’m sure we all know that pi is a little bit larger than three. This means that any circle is just over three times larger around than it is across. The failure of most people to be able to visualize this leads to a classic bar bet. Take any tall, thin drinking glass, the kind with a long stem, and ask the person sitting nearest you if its height is greater than its circumference. When he answers that it is, bet him that he is wrong. Optically, most such glasses appear to be much taller than they are fat, but unless your specimen is very tall and very thin you will win the bet every time. The circumference is more than three times larger than the diameter at the top of the glass. A vessel so proportioned that this length is nonetheless smaller than its height would have to be exceptionally tall and slender.
With the rising popularity of Android (Google), iPhone, and iPads, I thought it would be a good idea to search for free math-related apps, starting with Android. Unfortunately, I was appalled that many of the popular apps collect too much information, namely, your unique phone id.
What's the big deal? As far as I can tell, your unique phone id is just like your Social Security number—it's not something that you give out to anyone who asks for it. Unfortunately, this is exactly the scenario that I kept finding in the Android market. Why does a flashcard app need the equivalent of your Social Security number? It seems a little fishy to me and I can't recommend those apps.
Happy Tau Day, the most exciting math holiday you’ve yet to discover! Today, June 28th is 6/28, which contains in order the first three digits of tau (τ), the rival of math’s most popular irrational number, pi (π).
In 2001, Bob Palais wrote an article for The Mathematical Intelligencer called “π is wrong!” In it, he insists that the choice of using π in our mathematical formulas for hundreds of years is no good. He argues that the use of τ would simplify many formulas and that its derivation is much more intuitive. (Notice that the symbol resembles that for pi, but with one "leg" instead of two.)
The significance of our beloved irrational number π is that it is equal to the ratio of the circumference of any circle to its diameter--in notation, π = C/d. However, the most defining characteristic of a circle is not its diameter but its radius. A circle is defined as the collection of points on a plane that are exactly the same distance, its radius, from a point, its center. Palais argues that intuition should direct us to the use of a more elegant Circle Constant, tau, where τ is the ratio of the circumference of a circle to its radius--in notation, τ = C/r.
Self-described “notorious mathematical propagandist” Michael Hartl takes the argument even further in his now-famous “The Tau Manifesto,” which he published on Tau Day of 2010, exactly one year ago. He demonstrates with many adapted formulas that the factor of 2 is unnecessary if we incorporate it into the ratio itself. For instance, the periods of basic trigonometric functions f(x) = sin(x), and f(x) = cos(x), are in both cases 2π. Why not change them to tau instead? Palais and Hartl each list numerous other examples from calculus and physics, in which the factor of 2 is rendered obsolete by replacing 2π with τ.
The really intuitive part is revealed if you think of angle measure. As things are done now with π, a half turn of the circle is π radians, and a full turn is 2π radians. Should we adopt τ instead, τ radians would be a full turn, τ/2 radians a half turn, τ/4 radians a quarter turn, and so on.
There are, of course, instances where π appears un-doubled. For instance, the formula for the area of a circle: A = πr². Hartl shows, in a mathematically sophisticated way, that the replacement of π by τ even in this instance is the more sound choice, since it is analogous to similar formulas in physics.
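For the skeptical, a few lines of Python confirm the bookkeeping: a full turn is τ radians, sine repeats with period τ, and the area formula is unchanged in value when rewritten as (τ/2)r².

```python
import math

tau = 2 * math.pi  # the proposed circle constant, C/r

# Angle measure: fractions of a turn map directly onto fractions of tau.
print(math.isclose(tau / 4, math.pi / 2))             # quarter turn: True

# Periodicity: sin repeats after one full turn of tau radians.
x = 1.234
print(math.isclose(math.sin(x), math.sin(x + tau)))   # True

# Area: pi * r^2 equals (tau / 2) * r^2, just written differently.
r = 3.0
print(math.isclose(math.pi * r**2, (tau / 2) * r**2))  # True
```

Nothing mathematical changes either way, of course; the whole debate is about which constant makes the formulas easiest to read and teach.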
An article in today’s BBC News paints the issue as a violent conflict, with pi detractors up in arms over a lifetime of educational betrayal, which seems to this mathematician something of a manufactured controversy. (I can imagine you'd be upset if you are the sort of mathematician that has memorized pi to the nth digit. If you are one of these folks, here's the start for your new parlor trick: reciting tau, 6.283185307...)
They say that there is no such thing as a stupid question. New York State mathematics teachers whose students took the Regents Exam in Algebra 2 and Trigonometry last month (June 2011) are likely to disagree. The test contained a controversial question that asked students to find the inverse of a non-invertible function.
The problem was in the 2-point, or short-answer free response, portion of the exam, testing the learning standard that demands students “determine the inverse of a function and use composition to justify the result” (A2.A.45). The wording of the question strongly implies that the inverse of the function does indeed exist. However, since the function given is not one-to-one, there is no inverse. Teachers got loud, complaining to representatives of the Board of Regents, the group that writes, edits, and distributes the exam. The Board responded with a memo called “Scoring Clarification for Teachers,” which acknowledged several ways that students could interpret the question and demonstrate their understanding of invertibility of functions.
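The exam's actual function isn't reproduced here, but a stand-in example shows the underlying issue: f(x) = x² on all of the reals fails to be one-to-one, so it has no inverse unless the domain is restricted.

```python
import math

# Stand-in example (not the exam's function): f(x) = x^2 on all reals.
def f(x):
    return x * x

print(f(2), f(-2))  # 4 4: two different inputs collide on one output,
# so no g can satisfy g(f(x)) == x for every x; g(4) would have to be
# both 2 and -2 at once. Restricting the domain to x >= 0 repairs this:
for x in (0.0, 2.0, 5.0):
    assert math.isclose(math.sqrt(f(x)), x)
print("inverse recovered on the restricted domain x >= 0")
```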
Was the response satisfactory? The Board's memo cites “variations in the use of [inverse] notation throughout New York State,” which seems to evade blame for a lousy question. A prominent math teacher blogger responded on his blog, “How could the test-makers not be aware of ‘variations in notation’? Also, notice how there is an asymmetric justification burden on a kid claiming (correctly) that the inverse does not exist.” A lousy question shakes the faith that teachers and students have in the standardized test as a valid assessment of student understanding. For instance, the same blogger concluded, “I have no confidence in New York State’s ability to create a good test of mathematics, at any level.”
It is my sincere hope that this controversy and the appearance of a misleading question will lead to both (a) more opportunities to explore the meaning of invertible functions and one-to-one functions, encouraging students to become more savvy test-takers; and (b) increased scrutiny and more careful construction of New York’s Regents exams. In short, as educators, better instruction and better assessment should be our smart answer to this, or any, stupid question.
By Ian Stewart
Falling cats can turn over in mid-air. Well, most cats can. Our first cat, Seamus, didn’t have a clue. My wife, worried he might fall off a fence and hurt himself, tried to train him by holding him over a cushion and letting go. He enjoyed the game, but he never learned how to flip himself over.
This Day in World History - Each evening that weather permitted, Maria (pronounced Mah-RYE-uh) Mitchell mounted the stairs to the roof of her family’s Nantucket home to sweep the sky with a telescope looking for a comet. Mitchell—who had been taught mathematics and astronomy by her father—began the practice in 1836. Eleven years later, on October 1, 1847, her long labors finally paid off. When she saw the comet, she quickly summoned her father, who agreed with her conclusion.
Among mathematicians, it is always a happy moment when a long-standing problem is suddenly solved. The year 2012 started with such a moment, when an Irish mathematician named Gary McGuire announced a solution to the minimal-clue problem for Sudoku puzzles.
You have seen Sudoku puzzles, no doubt, since they are nowadays ubiquitous in newspapers and magazines. They look like this:
Your task is to fill in the vacant cells with the digits from 1 to 9 in such a way that each row, column, and three-by-three block contains each digit exactly once. In a proper puzzle, the starting clues are such as to guarantee there is only one way of completing the square.
This particular puzzle has just seventeen starting clues. It had long been believed that seventeen was the minimum number for any proper puzzle. Mathematician Gordon Royle maintains an online database which currently contains close to fifty thousand puzzles with seventeen starting clues (in fact, the puzzle above is adapted from one of the puzzles in that list). However, despite extensive computer searching, no example of a puzzle with sixteen or fewer clues had ever been found.
The problem was that an exhaustive computer search seemed impossible. There were simply too many possibilities to consider. Even using the best modern hardware, and employing the most efficient search techniques known, hundreds of thousands of years would have been required.
Pure mathematics likewise provided little assistance. It is easy to see that seven clues must be insufficient. With seven starting clues there would be at least two digits that were not represented at the start of the puzzle. To be concrete, let us say that there were no 1s or 2s in the starting grid. Then, in any completion of the starting grid it would be possible simply to change all the 1s to 2s, and all the 2s to 1s, to produce a second valid solution to the puzzle. After making this observation, however, it is already unclear how to continue. Even a simple argument proving the insufficiency of eight clues has proven elusive.
McGuire’s solution requires a combination of mathematics and computer science. To reduce the time required for an exhaustive search he employed the idea of an “unavoidable set.” Consider the shaded cells in this Sudoku square:
Now imagine a starting puzzle having this square for a solution. Can you see why we would need to have at least one starting clue in one of those shaded cells? The reason is that if we did not, then we would be able to toggle the digits in those cells to produce a second solution to the same puzzle. In fact, this particular Sudoku square has a lot of similar unavoidable sets; in general some squares will have more than others, and of different types. Part of McGuire’s solution involved finding a large collection of certain types of unavoidable sets in every Sudoku square under consideration.
Finding these unavoidable sets permits a dramatic reduction in the size of the space that must be searched. Rather than searching through every sixteen-clue subset of a given Sudoku square, desperately looking for one that is actually a proper puzzle, we need only consider sets of sixteen starting clues containing at least one clue from each of the unavoidable sets.
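To see the mechanics, here is a minimal sketch of the check that drives the reduction: a candidate set of clues can only yield a proper puzzle if it "hits" every unavoidable set. The cells and sets below are invented for illustration, not taken from a real grid.

```python
def hits_all_unavoidable_sets(clue_cells, unavoidable_sets):
    """Return True if the candidate clue set contains at least one cell
    from every unavoidable set. If some set is missed, its digits could
    be toggled to produce a second solution, so the puzzle is improper."""
    return all(clue_cells & s for s in unavoidable_sets)

# Hypothetical unavoidable sets, written as sets of (row, col) cells.
# Real ones are extracted from the solved grid; these are for illustration.
unavoidable = [
    {(0, 0), (0, 3), (1, 0), (1, 3)},
    {(2, 5), (2, 7), (4, 5), (4, 7)},
]

candidate = {(0, 3), (4, 5), (8, 8)}
print(hits_all_unavoidable_sets(candidate, unavoidable))      # True

candidate_bad = {(0, 1), (4, 5)}
print(hits_all_unavoidable_sets(candidate_bad, unavoidable))  # False: misses the first set
```

In McGuire's search this hitting-set condition prunes the overwhelming majority of sixteen-clue candidates before any expensive uniqueness check is attempted.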
Halloween has always been a fun time of year for me. I love dressing up in costume. It's very much like creating the characters in my stories, only in costume I become a character for real. In fact, I bring some costume pieces along with me when I do school visits and help the students devise new and interesting characters.
So today's post is a collection of interesting Halloween(ish) news I've unearthed of late.
Of course, you know I love libraries, so how cool is a haunted one? That's right, in Deep River, Connecticut, the public library (a former home built in 1881 by a local businessman) has not just one ghost but many. Wouldn't that make for some interesting storytimes?
The American Library Association's GREAT WEBSITES FOR KIDS isn't too scary, but there are a frightfully wonderful number of cool places to visit there. Take for example this website on BATS--the kind that fly in the night. That's kind of spooky.
Or try National Geographic's CAT site. Have you ever seen a cat skeleton?
So I admit, Math was always a little scary for me. That's why I've included this site here called COOL MATH--An Amusement Park of Math and More. Check it out for puzzles, games, and Bubba Man in his awesome Halloween costume.
If all these Halloween antics make you hungry, stop by the For Kids section here on my site and find the recipe for SPIDER SNACKS. Then you can munch along as you do the HALLOWEEN CROSSWORD, lurking just around the corner.
Do you remember with fondness the thrill of mathematical discovery? Your first geometry proof, using pi to calculate areas of circles, the imaginary number i, Pascal’s Triangle, and the Fibonacci Sequence may be distant memories, but the concepts still intrigue you. If you’re a math geek like I am, reading the Ponderables Illustrated History of Numbers is the perfect way to capture the joy you once felt.
Mathematical principles have not always been known. They developed throughout the ages by some of the masterminds of the sciences. In this illustrated history, we learn who was responsible for significant discoveries, and how they came about. One hundred “Ponderables” are presented for our enjoyment and enlightenment.
I have to admit, I am very biased in reviewing this book. I have always loved mathematics. Its perfect logic, symmetry, and order have been constant companions for me. And if you also love this exact science, you’ll love this book. I felt like I had taken a trip back to my favorite high school math classes. This is the perfect gift for any serious math student or your favorite math teacher. And if math isn’t your favorite subject, look for Ponderables in Chemistry, Space, Physics, Philosophy, and Computing.
29 November 2012 is the 140th anniversary of the death of mathematician Mary Somerville, the nineteenth century’s “Queen of Science”. Several years after her death, Oxford University’s Somerville College was named in her honor — a poignant tribute because Mary Somerville had been completely self-taught. In 1868, when she was 87, she had signed J. S. Mill’s (unsuccessful) petition for female suffrage, but I think she’d be astonished that we’re still debating “the woman question” in science. Physics, in particular — a subject she loved, especially mathematical physics — is still a very male-dominated discipline, and men as well as women are concerned about it.
Of course, science today is far more complex than it was in Somerville’s time, and for the past forty years feminist critics have been wondering if it’s the kind of science that women actually want; physics, in particular, has improved the lives of millions of people over the past 300 years, but it’s also created technologies and weapons that have caused massive human, social and environmental destruction. So I’d like to revisit an old debate: are science’s obstacles for women simply a matter of managing its applications in a more “female-friendly” way, or is there something about its exclusively male origins that has made science itself sexist?
To manage science in a more female-friendly way, it would be interesting to know if there’s any substance behind gender stereotypes such as that women prefer to solve immediate human problems, and are less interested than men in detached, increasingly expensive fundamental research, and in military and technological applications. Either way, though, it’s self-evident that women should have more say in how science is applied and funded, which means it’s important to have more women in decision-making positions — something we’re still far from achieving.
But could the scientific paradigm itself be alienating to women? Mary Somerville didn’t think so, but it’s often argued (most recently by some eco-feminist and post-colonial critics) that the seventeenth-century Scientific Revolution, which formed the template for modern science, was constructed by European men, and that consequently, the scientific method reflects a white, male way of thinking that inherently preferences white men’s interests and abilities over those of women and non-Westerners. It’s a problematic argument, but justification for it has included an important critique of reductionism — namely, that Western male experimental scientists have traditionally studied physical systems, plants, and even human bodies by dissecting them, studying their components separately and losing sight of the whole system or organism.
The limits of the reductionist philosophy were famously highlighted in biologist Rachel Carson’s book, Silent Spring, which showed that the post-War boom in chemical pest control didn’t take account of the whole food chain, of which insects are merely a part. Other dramatic illustrations are climate change, and medical disasters like the thalidomide tragedy: clearly, it’s no longer enough to focus selectively on specific problems such as the action of a drug on a particular symptom, or the local effectiveness of specific technologies; instead, scientists must consider the effect of a drug or medical procedure on the whole person, whilst new technological inventions shouldn’t be separated from their wider social and environmental ramifications.
In its proper place, however, reductionism in basic scientific research is important. (The recent infamous comment by American Republican Senate nominee Todd Akin — that women can “shut down” their bodies during a “legitimate rape”, in order not to become pregnant — illustrates the need for a basic understanding of how the various parts of the human body work.) I’m not sure if this kind of reductionism is a particularly male or particularly Western way of thinking, but either way there’s much more to the scientific method than this; it’s about developing testable hypotheses from observations (reductionist or holistic), and then testing those hypotheses in as objective a way as possible. The key thing in observing the world is curiosity, and this is a human trait, discernible in all children, regardless of race or gender. Of course, girls have traditionally faced more cultural restraints than boys, so perhaps we still need to encourage girls to be actively curious about the world around them. (For instance, it’s often suggested that women prefer biology to physics because they want to help people — and yet, many of the recent successes in medical and biological science would have been impossible without the technology provided by fundamental, curiosity-driven physics.)
Like Mary Somerville, I think the scientific method has universal appeal, but I also think feminist and other critics are right to question its patriarchal and capitalist origins. Although science at its best is value-free, it’s part of the broader community, whose values are absorbed by individual scientists. So much so that Yale researchers Moss-Racusin et al. recently uncovered evidence that many scientists themselves, male and female, have an unconscious sexist bias. In their widely reported study, participants judged the same job application (for a lab manager position) to be less competent if it had a (randomly assigned) female name than if it had a male name.
In Mary Somerville’s day, such bias was overt, and it had the authority of science itself: women’s smaller brain size was considered sufficient to “prove” female intellectual inferiority. It was bad science, and it shows how patriarchal perceptions can skew the interpretation not just of women’s competence, but also of scientific data itself. (Without proper vigilance, this kind of subjectivity can slip through the safeguards of the scientific method because of other prejudices, too, such as racism, or even the agendas of funding bodies.) Of course, acknowledging the existence of patriarchal values in society isn’t about hating men or assuming men hate women. Mary Somerville met with “the utmost kindness” from individual scientific men, but that didn’t stop many of them from seeing her as the exception that proved the male-created rule of female inferiority. After all, it takes analysis and courage to step outside a long-accepted norm. And so, the “woman question” is still with us — but in trying to resolve it, we might not only find ways to remove existing gender biases, but also broaden the conversation about what sort of science we all want in the twenty-first century.
Three words to sum up Alan Turing? Humour. He had an impish, irreverent and infectious sense of humour. Courage. Isolation. He loved to work alone. Reading his scientific papers, it is almost as though the rest of the world — the busy community of human minds working away on the same or related problems — simply did not exist. Turing was determined to do it his way. Three more words? A patriot. Unconventional — he was uncompromisingly unconventional, and he didn’t much care what other people thought about his unusual methods. A genius. Turing’s brilliant mind was sparsely furnished, though. He was a Spartan in all things, inner and outer, and had no time for pleasing décor, soft furnishings, superfluous embellishment, or unnecessary words. To him what mattered was the truth. Everything else was mere froth. He succeeded where a better furnished, wordier, more ornate mind might have failed. Alan Turing changed the world.
What would it have been like to meet him? Turing was tallish (5 feet 10 inches) and broadly built. He looked strong and fit. You might have mistaken his age, as he always seemed younger than he was. He was good looking, but strange. If you came across him at a party you would notice him all right. In fact you might turn round and say “Who on earth is that?” It wasn’t just his shabby clothes or dirty fingernails. It was the whole package. Part of it was the unusual noise he made. This has often been described as a stammer, but it wasn’t. It was his way of preventing people from interrupting him, while he thought out what he was trying to say. Ah – Ah – Ah – Ah – Ah. He did it loudly.
If you crossed the room to talk to him, you’d probably find him gauche and rather reserved. He was decidedly lah-di-dah, but the reserve wasn’t standoffishness. He was a man of few words, shy. Polite small talk did not come easily to him. He might, if you were lucky, smile engagingly, his blue eyes twinkling, and come out with something quirky that would make you laugh. If conversation developed, you’d probably find him vivid and funny. He might ask you, in his rather high-pitched voice, whether you think a computer could ever enjoy strawberries and cream, or could make you fall in love with it. Or he might ask if you can say why a face is reversed left to right in a mirror but not top to bottom.
Once you got to know him Turing was fun — cheerful, lively, stimulating, comic, brimming with boyish enthusiasm. His raucous crow-like laugh pealed out boisterously. But he was also a loner. “Turing was always by himself,” said codebreaker Jerry Roberts: “He didn’t seem to talk to people a lot, although with his own circle he was sociable enough.” Like everyone else Turing craved affection and company, but he never seemed to quite fit in anywhere. He was bothered by his own social strangeness — although, like his hair, it was a force of nature he could do little about. Occasionally he could be very rude. If he thought that someone wasn’t listening to him with sufficient attention he would simply walk away. Turing was the sort of man who, usually unintentionally, ruffled people’s feathers — especially pompous people, people in authority, and scientific poseurs. He was moody too. His assistant at the National Physical Laboratory, Jim Wilkinson, recalled with amusement that there were days when it was best just to keep out of Turing’s way. Beneath the cranky, craggy, irreverent exterior there was an unworldly innocence though, as well as sensitivity and modesty.
Turing died at the age of only 41. His ideas lived on, however, and at the turn of the millennium Time magazine listed him among the twentieth century’s 100 greatest minds, alongside the Wright brothers, Albert Einstein, DNA busters Crick and Watson, and the discoverer of penicillin, Alexander Fleming. Turing’s achievements during his short life were legion. Best known as the man who broke some of Germany’s most secret codes during the war of 1939-45, Turing was also the father of the modern computer. Today, all who click, tap or touch to open are familiar with the impact of his ideas. To Turing we owe the brilliant innovation of storing applications, and all the other programs necessary for computers to do our bidding, inside the computer’s memory, ready to be opened when we wish. We take for granted that we use the same slab of hardware to shop, manage our finances, type our memoirs, play our favourite music and videos, and send instant messages across the street or around the world. Like many great ideas this one now seems as obvious as the wheel and the arch, but with this single invention — the stored-program universal computer — Turing changed the way we live. His universal machine caught on like wildfire; today personal computer sales hover around the million-a-day mark. In less than four decades, Turing’s ideas transported us from an era where ‘computer’ was the term for a human clerk who did the sums in the back office of an insurance company or science lab, into a world where many young people have never known life without the Internet.
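The stored-program idea is easier to appreciate with a toy in front of you. Here is a minimal sketch in Python (the instruction set, the names, and the sample program are invented for illustration; it is a cartoon of the concept, not Turing’s design or any historical machine): the program is ordinary data sitting in the machine’s memory, so replacing the data replaces the behaviour without touching the hardware.

# A toy stored-program machine. The "hardware" is the loop below;
# the "software" is a list of instructions held in memory.
def run(memory):
    pc = 0   # program counter: index of the next instruction in memory
    acc = 0  # a single accumulator register
    while True:
        op, arg = memory[pc]
        if op == "LOAD":     # put a constant into the accumulator
            acc = arg
        elif op == "ADD":    # add a constant to the accumulator
            acc += arg
        elif op == "PRINT":  # output the accumulator's value
            print(acc)
        elif op == "HALT":   # stop the machine
            return
        pc += 1              # fetch the next stored instruction

# The program is just data; swap the list and the same machine does
# something entirely different.
program = [("LOAD", 2), ("ADD", 3), ("PRINT", None), ("HALT", None)]
run(program)  # prints 5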
Unfortunately, I did not (yet) add another math problem. In the interest of moving the story along, I don't feel that a math problem is appropriate at this stage. If you recall from a previous post, there are at least two ways of writing a "math story":
Dramatize the solution of a math problem. The process of solving a challenging problem is a story: the problem grabs your attention, you struggle to analyze the problem, you try various dead ends. Finally, the solution hits you. It's so obvious! How did you miss it before? That's a story.
Take a "normal" story and add math. This is similar to the series Numbers.
While rocking out to Patti Smith in celebration of her National Book Award win, I rediscovered her tribute, “Radio Baghdad.” The song celebrates the Iraqi city’s rich cultural and intellectual history, and as a refrain she specifically mentions its involvement in the invention of zero: “We created the zero/But we mean nothing to you.”
Smith honors Baghdad’s intellectual contribution to the establishment of zero as a number. Zero deserves her praise for its usefulness as a placeholder (as in the number 306), for its role as the additive identity element (if you add zero to any number, you get that number—in symbols, n + 0 = n for any number n), and for its contribution to the development of calculus. As the late writer David Foster Wallace elegantly claimed, “The invention of calculus was shocking because for a long time it had simply been presumed that you couldn't divide by zero.” Zero is a game-changer, a distinct value, and the barrier between positive and negative.
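Both roles are easy to state precisely. As a one-line restatement in notation (standard positional arithmetic, nothing beyond what the paragraph above already says):

\[ 306 = 3 \cdot 10^2 + 0 \cdot 10^1 + 6 \cdot 10^0 \qquad\text{and}\qquad n + 0 = n \ \text{ for every number } n. \]

Without the zero holding the tens column, 306 would collapse into 36; and because adding zero changes nothing, it is the unique additive identity.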
The richly informative book 100 Greatest Science Inventions of All Time tells the story of Al-Khwarizmi. In 810 A.D., this famous Baghdad mathematician convinced a group of fellow scholars that zero must be a number by demonstrating that zero behaves like a number when subjected to common operations. Not only did Al-Khwarizmi thus effectively demonstrate zero as a number, but he also established himself as the founder of algebra. I love this story because I think it eloquently demonstrates the following disposition
One of the problems we have when writing problems and activities is creating good math art. While hand-drawn art is always an option, in the longer run it is not that efficient. (For example, what happens if you make a mistake or change your mind?)
Unfortunately, creating good math art with software is often complex and tedious. By “good math art,” I mean art that is both mathematically accurate and styled exactly the way you want it. With most software (for example Mathematica), you often have to choose between these two.
Inkscape is a free, open-source application for creating line art. At first glance, it doesn’t appear that it’s suited for creating math art. But the hidden gems are found in the Extensions menu.
Open the Extensions menu and click on Render. There you’ll find options for drawing:
platonic solids (3D Polyhedra)
grids (Cartesian Grid and Grid)
graphs of functions (Function Plotter)
triangles drawn to scale (Triangle)
altitudes and angle bisectors (Draw From Triangle)
Is Inkscape easy to use? Yes. Can you style figures to your heart’s content? Yes. Can you create mathematically accurate art? Yes. It looks like we found what we were looking for.
The figures below were created in Inkscape in about 3 minutes:
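For those who would rather script such figures than click through menus, one possible complement is to generate the SVG programmatically and do the final styling in Inkscape. A minimal sketch in Python, assuming matplotlib and numpy are installed (the plotted function and the output file name are arbitrary examples, not anything from the original post):

# Plot a function and save it as SVG, which Inkscape can open for styling.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3, 3, 200)           # sample points along the x-axis
plt.plot(x, x**2, label="y = x^2")    # a mathematically accurate curve
plt.axhline(0, color="gray")          # horizontal axis through the origin
plt.axvline(0, color="gray")          # vertical axis through the origin
plt.legend()
plt.savefig("parabola.svg")           # vector output, editable in Inkscape

Because the result is ordinary SVG, every curve and label arrives in Inkscape as editable vector art, so accuracy comes from the script and styling stays in your hands.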
Émilie du Châtelet, wrote Voltaire, ‘was a great man whose only fault was being a woman.’ Du Châtelet has paid the penalty for being a woman twice over. During her life, she was denied the educational opportunities and freedom that she craved. ‘Judge me for my own merits,’ she protested: ‘do not look upon me as a mere appendage to this great general or that renowned scholar’ – but since her death, she has been demoted to subsidiary status as Voltaire’s mistress and Isaac Newton’s translator.
Too often moulded into hackneyed stereotypes – the learned eccentric, the flamboyant lover, the devoted mother – du Châtelet deserves more realistic appraisals as a talented yet fallible woman trapped between overt discrimination and inner doubts about her worth. ‘I am in my own right a whole person,’ she insisted. I hope she would appreciate how I see her …
Émilie du Châtelet (1706-49) was tall and beautiful. Many intellectual women would object to an account starting with their looks, but du Châtelet took great care with her appearance. She spent a fortune on clothes and jewellery, acquiring the money from her husband, a succession of lovers, and her own skills at the gambling table (being mathematically gifted can bring unexpected rewards). She brought the same intensity to her scientific work, plunging her hands in ice-cold water to keep herself awake as she wrote through the night. This whole-hearted enthusiasm for every activity she undertook explains why I admire her so much. The major goal of life, she believed, was to be happy – and for her that meant indulging but also balancing her passions for food, sex and learning.
Born into a wealthy family, du Châtelet benefited from an enlightened father who left her free to browse in his library and hired tutors to give her lessons more appropriate for boys than for marriageable girls. By the time she was twelve, du Châtelet could speak six languages, but it was not until her late twenties that she started to immerse herself in mathematics and Newtonian philosophy. By then, she was married to an elderly army officer, had two surviving children, and was developing intimate friendships with several clever young men who helped her acquire the education she was not allowed to gain at university.
When Voltaire’s radical politics provoked a warrant for his arrest, she concealed him in her husband’s run-down estate at Cirey and returned to Paris to restore his reputation. Over the next year, she oscillated between rural seclusion with Voltaire and partying in Paris, but after some prompting, she eventually made her choice and stuck to it. For fifteen years, they lived together at Cirey, happily embroiled in a private world of intense intellectual endeavour laced with romance, living in separate apartments linked by a secret passage and visited from time to time by her accommodating husband.
For decades, French scholars had been reluctant to abandon the ideas of their own national hero, René Descartes, and instead adopt those of his English rival, Newton. They are said to have been converted by a small book that appeared in 1738: Elements of Newtonian Philosophy. The only name on the title-page is Voltaire’s, but it is clear that this was a collaborative venture in which du Châtelet played a major role: as Voltaire to