Viewing: Blog Posts Tagged with: Mathematics
Alan Mathison Turing (1912-1954) was a mathematician and computer scientist, remembered for his revolutionary design of the Automatic Computing Engine, an influential early stored-program computer, and for his crucial role in breaking the Enigma ciphers during the Second World War. He continues to be regarded as one of the greatest scientists of the 20th century.
We live in an age that Turing both predicted and defined. His life and achievements are starting to be celebrated in popular culture, largely with the help of the newly released film The Imitation Game, starring Benedict Cumberbatch as Turing and Keira Knightley as Joan Clarke. We’re proud to publish some of Turing’s own work in mathematics, computing, and artificial intelligence, as well as numerous explorations of his life and work.
Are you worried about catching the flu, or perhaps even Ebola? Just how worried should you be? Well, that depends on how fast a disease will spread over social and transportation networks, so it’s obviously important to obtain good estimates of the speed of disease transmission and to figure out good containment strategies to combat disease spread.
Diseases, rumors, memes, and other information all spread over networks. A lot of research has explored the effects of network structure on such spreading. Unfortunately, most of this research has a major issue: it considers networks that are not realistic enough, and this can lead to incorrect predictions of transmission speeds, of which people are most important in a network, and so on. So how does one address this problem?
Traditionally, most studies of propagation on networks assume a very simple network structure that is static and only includes one type of connection between people. By contrast, real networks change in time — one contacts different people during weekdays and on weekends, one (hopefully) stays home when one is sick, new university students arrive from all parts of the world every autumn to settle into new cities. They also include multiple types of social ties (Facebook, Twitter, and – gasp – even face-to-face friendships), multiple modes of transportation, and so on. That is, we consume and communicate information through all sorts of channels. To consider a network with only one type of social tie ignores these facts and can potentially lead to incorrect predictions of which memes go viral and how fast information spreads. It also fails to differentiate people who are important in one medium from people who are important in a different medium (or across multiple media). In fact, most real networks include a far richer “multilayer” structure. Collapsing such structures to obtain and then study a simpler network representation can yield incorrect answers for how fast diseases or ideas spread, the robustness level of infrastructures, how long it takes for interacting oscillators to synchronize, and more.
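The point about medium-specific importance can be sketched in a few lines of Python. This is a toy illustration: the people and ties below are entirely invented, and "importance" is reduced to a simple tie count per layer.

```python
# Two layers over the same five (hypothetical) people:
face_to_face = {("Ana", "Bo"), ("Bo", "Che"), ("Che", "Dia")}
online = {("Dia", "Eli"), ("Eli", "Ana"), ("Eli", "Bo"), ("Eli", "Che")}

def degrees(edges):
    """Count how many ties each person has within one layer."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

# Collapsing both layers into one network merges all ties indiscriminately.
collapsed = degrees(face_to_face | online)

# Eli has no face-to-face ties at all, yet is the hub of the online layer;
# the collapsed network shows Eli as central but cannot say in which medium.
print(degrees(face_to_face).get("Eli", 0))  # 0 ties face-to-face
print(degrees(online)["Eli"])               # 4 ties online
print(collapsed["Eli"])                     # 4 ties, medium unknown
```

Keeping the layers separate preserves exactly the information that the collapsed representation throws away.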
Recently, a rapidly growing number of researchers have been studying mathematical objects called “multilayer networks”. These generalize ordinary networks and allow one to incorporate time-dependence, multiple modes of connection, and other complexities. Work on multilayer networks dates back many decades in fields like sociology and engineering, and of course it is well-known that networks don’t exist in isolation but rather are coupled to other networks. The last few years have seen a rapid explosion of new theoretical tools to study multilayer networks.
And what types of things do researchers need to figure out? For one thing, it is known that multilayer structures induce correlations that are invisible if one collapses multilayer networks into simpler representations, so it is essential to figure out when and by how much such correlations increase or decrease the propagation of diseases and information, how they change the ability of oscillators to synchronize, and so on. From the standpoint of theory, it is necessary to develop better methods to measure multilayer structures, as a large majority of the tools that have been used thus far to study multilayer networks are mostly just more complicated versions of existing diagnostics and models. We need to do better. It is also necessary to systematically examine the effects of multilayer structures, such as correlations between different layers (e.g., perhaps a person who is important for the social network that is encapsulated in one layer also tends to be important in other layers?), on different types of dynamical processes. In these efforts, it is crucial to consider not only simplistic (“toy”) models — as in most of the work on multilayer networks thus far — but to move the field towards the examination of ever more realistic and diverse models and to estimate the parameters of these models from empirical data. As our review article illustrates, multilayer networks are both exciting and important to study, but the increasingly large community that is studying them still has a long way to go. We hope that our article will help steer these efforts, which promise to be very fruitful.
If a “revolution” in our field or area of knowledge was ongoing, would we feel it and recognize it? And if so, how?
I think a methodological “revolution” is probably going on in the science of epidemiology, but I’m not totally sure. Of course, in science not being sure is part of our normal state. And we mostly like it. Many times I have had the feeling that a revolution was ongoing in epidemiology. While reading scientific articles, for example. And I saw signs of it, which I think are clear, when reading the latest draft of the forthcoming book Causal Inference by M.A. Hernán and J.M. Robins from Harvard (Chapman & Hall / CRC, 2015). I think the “revolution” — or should we just call it a “renewal”? — is deeply changing how epidemiological and clinical research is conceived, how causal inferences are made, and how we assess the validity and relevance of epidemiological findings. I suspect it may be having an immense impact on the production of scientific evidence in the health, life, and social sciences. If this were so, then the impact would also be large on most policies, programs, services, and products in which such evidence is used. And it would be affecting thousands of institutions, organizations, and companies, and millions of people.
One example: at present, in clinical and epidemiological research, “paradoxes” are being deconstructed every week. Apparent paradoxes that have long been observed, and whose causal interpretation was at best dubious, are now shown to have little or no causal significance. For example, while obesity is a well-established risk factor for type 2 diabetes (T2D), among people who have already developed T2D the obese fare better than T2D individuals with normal weight. Obese diabetics appear to survive longer and to have a milder clinical course than non-obese diabetics. But it is now being shown that the observation lacks causal significance. (Yes, indeed, an observation may be real and yet lack causal meaning.) The demonstration comes from physicians, epidemiologists, and mathematicians like Robins, Hernán, and colleagues as diverse as S. Greenland, J. Pearl, A. Wilcox, C. Weinberg, S. Hernández-Díaz, N. Pearce, C. Poole, T. Lash, J. Ioannidis, P. Rosenbaum, D. Lawlor, J. Vandenbroucke, G. Davey Smith, T. VanderWeele, or E. Tchetgen, among others. They are building methodological knowledge upon knowledge and methods generated by graph theory, computer science, or artificial intelligence. Perhaps the simplest way to explain why observations such as the “obesity paradox” lack causal significance is that “conditioning on a collider” (in our example, focusing only on individuals who developed T2D) creates a spurious association between obesity and survival.
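A short simulation makes the collider mechanism concrete. This is a hypothetical toy model with made-up risk numbers, not a fitted epidemiological model: obesity and an independent “frailty” factor each raise T2D risk, frailty alone raises mortality, and obesity has no effect on mortality at all.

```python
import random

random.seed(42)

def simulate(n=200_000):
    """Generate (obese, t2d, dead) triples from the toy causal structure:
    obesity -> T2D <- frailty -> death.  T2D is a collider on the path
    between obesity and death; obesity itself never causes death here."""
    rows = []
    for _ in range(n):
        obese = random.random() < 0.3
        frail = random.random() < 0.3
        t2d = random.random() < 0.05 + 0.25 * obese + 0.25 * frail
        dead = random.random() < 0.05 + 0.30 * frail
        rows.append((obese, t2d, dead))
    return rows

def death_rate(rows, *, obese, t2d=None):
    sel = [dead for (o, t, dead) in rows
           if o == obese and (t2d is None or t == t2d)]
    return sum(sel) / len(sel)

rows = simulate()
# In the full population, mortality does not depend on obesity at all ...
whole_gap = death_rate(rows, obese=False) - death_rate(rows, obese=True)
# ... but conditioning on the collider (restricting to T2D patients) makes
# obesity look protective: non-obese diabetics are disproportionately frail,
# because something other than obesity had to cause their T2D.
t2d_gap = (death_rate(rows, obese=False, t2d=True)
           - death_rate(rows, obese=True, t2d=True))
```

Under these invented numbers, `whole_gap` is near zero while `t2d_gap` is clearly positive: the “obesity paradox” appears in the data even though obesity has, by construction, no causal effect on survival.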
The “revolution” is partly founded on complex mathematics and concepts such as “counterfactuals,” as well as on attractive “causal diagrams” like Directed Acyclic Graphs (DAGs). Causal diagrams are a simple way to encode our subject-matter knowledge, and our assumptions, about the qualitative causal structure of a problem. Causal diagrams also encode information about potential associations between the variables in the causal network. DAGs must be drawn following rules much stricter than those of the informal, heuristic graphs that we all use intuitively. Amazingly, but not surprisingly, the new approaches provide insights that are beyond most methods in current use. In particular, the new methods go far deeper and beyond the methods of “modern epidemiology,” a methodological, conceptual, and partly ideological current whose main emergence took place in the 1980s, led by statisticians and epidemiologists such as O. Miettinen, B. MacMahon, K. Rothman, S. Greenland, S. Lemeshow, D. Hosmer, P. Armitage, J. Fleiss, D. Clayton, M. Susser, D. Rubin, G. Guyatt, D. Altman, J. Kalbfleisch, R. Prentice, N. Breslow, N. Day, D. Kleinbaum, and others.
We live exciting days of paradox deconstruction. It is probably part of a wider cultural phenomenon, if you think of the “deconstruction of the Spanish omelette” authored by Ferran Adrià when he was the world-famous chef at the elBulli restaurant. Yes, just kidding.
Right now I cannot find a better or easier way to document the possible “revolution” in epidemiological and clinical research. Worse, I cannot find a firm way to assess whether my impressions are true. No doubt this is partly due to my ignorance in the social sciences. Actually, I don’t know much about social studies of science, epistemic communities, or knowledge construction. Maybe this is why I claimed that a sociology of epidemiology is much needed. A sociology of epidemiology would apply the scientific principles and methods of sociology to the science, discipline, and profession of epidemiology in order to improve understanding of the wider social causes and consequences of epidemiologists’ professional and scientific organization, patterns of practice, ideas, knowledge, and cultures (e.g., institutional arrangements, academic norms, scientific discourses, defense of identity, and epistemic authority). It could also address the patterns of interaction of epidemiologists with other branches of science and professions (e.g. clinical medicine, public health, the other health, life, and social sciences), and with social agents, organizations, and systems (e.g. the economic, political, and legal systems). I believe the tradition of sociology in epidemiology is rich, while the sociology of epidemiology is virtually uncharted (in the sense of neither mapped nor surveyed) and unchartered (i.e. not furnished with a charter or constitution).
Another way I can suggest to look at what may be happening with clinical and epidemiological research methods is to read the changes that we are witnessing in the definitions of basic concepts such as risk, rate, risk ratio, attributable fraction, bias, selection bias, confounding, residual confounding, interaction, cumulative and density sampling, open population, test hypothesis, null hypothesis, causal null, causal inference, Berkson’s bias, Simpson’s paradox, frequentist statistics, generalizability, representativeness, missing data, standardization, or overadjustment. The possible existence of a “revolution” might also be assessed in recent and new terms such as collider, M-bias, causal diagram, backdoor (biasing path), instrumental variable, negative controls, inverse probability weighting, identifiability, transportability, positivity, ignorability, collapsibility, exchangeable, g-estimation, marginal structural models, risk set, immortal time bias, Mendelian randomization, nonmonotonic, counterfactual outcome, potential outcome, sample space, or false discovery rate.
You may say: “And what about textbooks? Are they changing dramatically? Has one changed the rules?” Well, the new generation of textbooks is just emerging, and very few people have yet read them. Two good examples are the already mentioned text by Hernán and Robins, and the forthcoming Explanation in Causal Inference: Methods for Mediation and Interaction by T. VanderWeele (Oxford University Press, 2015). Clues can also be found in widely used textbooks by K. Rothman et al. (Modern Epidemiology, Lippincott-Raven, 2008), M. Szklo and J. Nieto (Epidemiology: Beyond the Basics, Jones & Bartlett, 2014), or L. Gordis (Epidemiology, Elsevier, 2009).
Finally, another good way to assess what might be changing is to read what gets published in top journals such as Epidemiology, the International Journal of Epidemiology, the American Journal of Epidemiology, or the Journal of Clinical Epidemiology. Pick up any issue of the main epidemiologic journals and you will find several examples of what I suspect is going on. If you feel like it, look for the DAGs. I recently saw a tweet saying “A DAG in The Lancet!”. It was a surprise: major clinical journals are lagging behind. But they will soon follow and adopt the new methods: the clinical relevance of the latter is huge. Or is it not such a big deal? If no “revolution” is going on, how are we to know?
Why do we teach students how to prove things we all know already, such as 0.9999… = 1?
Partly, of course, so they develop thinking skills to use on questions whose truth-status they won’t know in advance. Another part, however, concerns the dialogue nature of proof: a proof must be not only correct but also persuasive, and persuasiveness is not objective and absolute; it is a two-body problem. It is not only to tango that one needs two.
The statements — (1) ice floats on water, (2) ice is less dense than water — are widely acknowledged as facts and, usually, as interchangeable facts. But although rooted in everyday experience, they are not that experience. We have first represented the stuff of experience by the sounds English speakers use to stand for it, then represented those sounds by word-processor symbols that, by common agreement, stand for them. Two steps away from reality already! This is what humans do: we invent symbols for perceived realities and, eventually, evolve procedures for manipulating them in ways that mirror how their real-world origins behave. Virtually no communication between two persons, and possibly not much internal dialogue within one mind, can proceed without this. Man is a symbol-using animal.
Statement (1) counts as fact because folk living in cooler climates have directly observed it throughout history (and conflicting evidence is lacking). Statement (2) is factual in a significantly different sense, arising by further abstraction from (1) and from a million similar experiential observations. Partly to explain (1) and its many cousins, we have conceived ideas like mass, volume, ratio of mass to volume, and explored for generations towards the conclusion that mass-to-volume works out the same for similar materials under similar conditions, and that the comparison of mass-to-volume ratios predicts which materials will float upon others.
Statement (3): 19 is a prime number. In what sense is this a fact? Its roots are deep in direct experience: the hunter-gatherer wishing to share nineteen apples equally with his two brothers or his three sons or his five children must have discovered that he couldn’t without extending his circle of acquaintance so far that each got only one, long before he had a name for what we call ‘nineteen’. But (3) is many steps away from the experience where it is grounded. It involves conceptualisation of numerical measurements of sets one encounters, and millennia of thought to acquire symbols for these and codify procedures for manipulating them in ways that mirror how reality functions. We’ve done this so successfully that it’s easy to forget how far from the tangibles of experience they stand.
Statement (4): √2 is not exactly the ratio of two whole numbers. Most first-year mathematics students know this. But by this stage of abstraction, separating its fact-ness from its demonstration is impossible: the property of being exactly a fraction is not detectable by physical experience. It is a property of how we abstracted and systematised the numbers that proved useful in modelling reality, not of our hands-on experience of reality. The reason we regard √2’s irrationality as factual is precisely because we can give a demonstration within an accepted logical framework.
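The demonstration mentioned here is the classic argument, which fits in three lines. Suppose, for contradiction, that √2 is exactly a fraction p/q written in lowest terms:

```latex
\begin{align*}
\sqrt{2} = \tfrac{p}{q} &\implies p^2 = 2q^2
  && \text{so $p^2$ is even, hence $p = 2r$ for some whole number $r$,}\\
&\implies 4r^2 = 2q^2,\ \text{i.e.}\ q^2 = 2r^2
  && \text{so $q$ is even too,}\\
&\implies 2 \mid p \ \text{and}\ 2 \mid q,
  && \text{contradicting that $p/q$ was in lowest terms.}
\end{align*}
```

Nothing in this argument appeals to physical experience; it lives entirely within the logical framework in which the numbers were systematised, which is exactly the point.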
What then about recurring decimals? For persuasive argument, first ascertain the distance from reality at which the question arises: not, in this case, the rarefied atmosphere of undergraduate mathematics but the primary school classroom. Once a child has learned rituals for dividing whole numbers and the convenience of decimal notation, she will try to divide, say, 2 by 3 and will hit a problem. The decimal representation of the answer does not cease to spew out digits of lesser and lesser significance no matter how long she keeps turning the handle. What should we reply when she asks whether zero point infinitely many 6s is or is not two thirds, or even — as a thoughtful child should — whether zero point infinitely many 6s is a legitimate symbol at all?
The answer must be tailored to the questioner’s needs, but the natural way forward — though it took us centuries to make it logically watertight! — is the nineteenth-century definition of sum of an infinite series. For the primary school kid it may suffice to say that, by writing down enough 6s, we’d get as close to 2/3 as we’d need for any practical purpose. For differential calculus we’d need something better, and for model-theoretic discourse involving infinitesimals something better again. Yet the underpinning mathematics for equalities like 0.6666… = 2/3 where the question arises is the nineteenth-century one. Its fact-ness therefore resembles that of ice being less dense than water, of 19 being prime, or of √2 being irrational. It can be demonstrated within a logical framework that systematises our observations of real-world experiences. So it is a fact not about reality but about the models we build to explain reality. Demonstration is the only tool available for establishing its truth.
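Written out, the nineteenth-century definition treats the recurring decimal as the limit of its partial sums, a geometric series:

```latex
0.6666\ldots \;=\; \sum_{k=1}^{\infty} \frac{6}{10^{k}}
\;=\; \lim_{n\to\infty}\sum_{k=1}^{n} \frac{6}{10^{k}}
\;=\; \frac{6/10}{1 - 1/10}
\;=\; \frac{6}{9} \;=\; \frac{2}{3},
\qquad\text{and likewise}\qquad
0.9999\ldots \;=\; \frac{9/10}{1 - 1/10} \;=\; 1.
```

The child's intuition — "write down enough 6s and you are as close to 2/3 as you need" — is precisely what the limit formalises.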
Mathematics without proof is not like an omelette without salt and pepper; it is like an omelette without egg.
Why should you study paradoxes? The easiest way to answer this question is with a story:
In 2002 I was attending a conference on self-reference in Copenhagen, Denmark. During one of the breaks I got a chance to chat with Raymond Smullyan, who is amongst other things an accomplished magician, a distinguished mathematical logician, and perhaps the best-known popularizer of ‘Knight and Knave’ (K&K) puzzles.
K&K puzzles involve an imaginary island populated by two tribes: the Knights and the Knaves. Knights always tell the truth, and Knaves always lie (further, members of both tribes are forbidden to engage in activities that might lead to paradoxes or situations that break these rules). Other than their linguistic behavior, there is nothing that distinguishes Knights from Knaves.
Typically, K&K puzzles involve trying to answer questions based on assertions made by, or questions answered by, an inhabitant of the island. For example, a classic K&K puzzle involves meeting an islander at a fork in the road, where one path leads to riches and success and the other leads to pain and ruin. You are allowed to ask the islander one question, after which you must pick a path. Not knowing to which tribe the islander belongs, and hence whether she will lie or tell the truth, what question should you ask?
(Answer: You should ask “Which path would someone from the other tribe say was the one leading to riches and success?”, and then take the path not indicated by the islander).
Back to Copenhagen in 2002: Seizing my chance, I challenged Smullyan with the following K&K puzzle, of my own devising:
There is a nightclub on the island of Knights and Knaves, known as the Prime Club. The Prime Club has one strict rule: the number of occupants in the club must be a prime number at all times.
The Prime Club also has strict bouncers (who stand outside the doors and do not count as occupants) enforcing this rule. In addition, a strange tradition has become customary at the Prime Club: Every so often the occupants form a conga line, and sing a song. The first lyric of the song is:
“At least one of us in the club is a Knave.”
and is sung by the first person in the line. The second lyric of the song is:
“At least two of us in the club are Knaves.”
and is sung by the second person in the line. The third person (if there is one) sings:
“At least three of us in the club are Knaves.”
And so on down the line, until everyone has sung a verse.
One day you walk by the club, and hear the song being sung. How many people are in the club?
Smullyan’s immediate response to this puzzle was something like “That can’t be solved – there isn’t enough information”. But he then stood alone in the corner of the reception area for about five minutes, thinking, before returning to confidently (and correctly, of course) answer “Two!”
I won’t spoil things by giving away the solution – I’ll leave that mystery for interested readers to solve on their own. (Hint: if the song is sung with any other prime number of islanders in the club, a paradox results!) I will note that the song is equivalent to a more formal construction involving a list of sentences of the form:
At least one of sentences S1 – Sn is false.
At least two of sentences S1 – Sn are false.
…
At least n of sentences S1 – Sn are false.
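For readers who would rather verify than derive, the counting rule behind the song can be checked by brute force. This sketch (my own framing, not Smullyan's) simply tries every possible number of Knaves and keeps the consistent ones, confirming the answer already revealed in the story:

```python
def consistent_knave_counts(n):
    """Person i (1-based) in a line of n sings 'At least i of us are Knaves.'
    With k Knaves in total, person i's statement is true iff k >= i, so the
    liars (Knaves) are exactly the people with i > k.  Consistency demands
    that the number of liars, n - k, equal k itself."""
    return [k for k in range(n + 1)
            if sum(1 for i in range(1, n + 1) if i > k) == k]

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# n - k == k forces n = 2k, so only even occupancies can be consistent,
# and the only even prime is 2:
prime_solutions = [n for n in range(2, 60)
                   if is_prime(n) and consistent_knave_counts(n)]
print(prime_solutions)  # [2]
```

Any odd prime number of singers yields no consistent assignment at all — the paradox the hint alludes to — which is why the Prime Club's rule pins the answer down to two.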
The point of this story isn’t to brag about having stumped a famous logician (even for a mere five minutes), although I admit that this episode (not only stumping Smullyan, but meeting him in the first place) is still one of the highlights of my academic career.
Instead, the story, and the puzzle at the center of it, illustrates the reasons why I find paradoxes so fascinating and worthy of serious intellectual effort. The standard story regarding why paradoxes are so important is that, although they are sometimes silly in-and-of-themselves, paradoxes indicate that there is something deeply flawed in our understanding of some basic philosophical notion (truth, in the case of the semantic paradoxes linked to K&K puzzles).
Another reason for their popularity is that they are a lot of fun. Both of these are really good reasons for thinking deeply about paradoxes. But neither is the real reason why I find them so fascinating. The real reason I find paradoxes so captivating is that they are much more mathematically complicated, and as a result much more mathematically interesting, than standard accounts (which typically equate paradoxes with the presence of some sort of circularity) might have you believe.
The Prime Club puzzle demonstrates that whether a particular collection of sentences is or is not paradoxical can depend on all sorts of surprising mathematical properties, such as whether there is an even or odd number of sentences in the collection, or whether the number of sentences in the collection is prime or composite, or all sorts of even weirder and more surprising conditions.
Other examples demonstrate that whether a construction (or, equivalently, a K&K story) is paradoxical can depend on whether the referential relation involved in the construction (i.e. the relation that holds between two sentences if one refers to the other) is symmetric, or is transitive.
The paradoxicality of still another type of construction, involving infinitely many sentences, depends on whether cofinitely many of the sentences each refer to cofinitely many of the other sentences in the construction (a set is cofinite if its complement is finite). And this only scratches the surface!
The more I think about and work on paradoxes, the more I marvel at how complicated the mathematical conditions for generating paradoxes are: it takes a lot more than the mere presence of circularity to generate a mathematical or semantic paradox, and stating exactly what is minimally required is still too difficult a question to answer precisely. And that’s why I work on paradoxes: their surprising mathematical complexity and mathematical beauty. Fortunately for me, a lot of work remains to be done, and a lot of complexity and beauty remains to be discovered.
A large variety of complex systems in ecology, climate science, biomedicine, and engineering have been observed to exhibit so-called tipping points, where the dynamical state of the system abruptly changes. Typical examples are the rapid transition in lakes from clear to turbid conditions or the sudden extinction of species after a slight change in environmental conditions. Data and models suggest that detectable warning signs may precede some, though clearly not all, of these drastic events. This view is also corroborated by recently developed abstract mathematical theory for systems where processes evolve at different rates and are subject to internal and/or external stochastic perturbations.
One main idea to derive warning signs is to monitor the fluctuations of the dynamical process by calculating the variance of a suitable monitoring variable. When the tipping point is approached via a slowly-drifting parameter, the stabilizing effects of the system slowly diminish and the noisy fluctuations increase via certain well-defined scaling laws.
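As a rough illustration (a toy simulation, not the method used in our actual analysis), both warning signs can be watched emerging in the simplest noisy model with a slowly drifting parameter: a linear process whose restoring force weakens over time.

```python
import random

random.seed(0)

def ar1_with_drift(n=4000, a_start=0.2, a_end=0.97, noise=1.0):
    """x_{t+1} = a_t * x_t + noise, with a_t drifting toward 1.
    As the restoring force (1 - a_t) vanishes, perturbations decay ever
    more slowly: the 'critical slowing down' that precedes a tipping point."""
    x, xs = 0.0, []
    for t in range(n):
        a = a_start + (a_end - a_start) * t / (n - 1)
        x = a * x + random.gauss(0.0, noise)
        xs.append(x)
    return xs

def variance(w):
    m = sum(w) / len(w)
    return sum((v - m) ** 2 for v in w) / len(w)

def lag1_autocorr(w):
    m = sum(w) / len(w)
    num = sum((w[i] - m) * (w[i + 1] - m) for i in range(len(w) - 1))
    den = sum((v - m) ** 2 for v in w)
    return num / den

xs = ar1_with_drift()
early, late = xs[:800], xs[-800:]
# Both monitoring variables grow as the tipping point is approached:
print(variance(early), variance(late))          # late variance is larger
print(lag1_autocorr(early), lag1_autocorr(late))  # late autocorrelation is larger
```

The specific numbers (window sizes, drift range) are arbitrary; the qualitative signature — rising variance and rising lag-1 autocorrelation in the monitoring window — is the scaling-law behaviour described above.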
Based upon these observations, it is natural to ask whether these scaling laws are also present in human social networks and can allow us to make predictions about future events. This is an exciting open problem, to which at present only highly speculative answers can be given. It is indeed very difficult to predict a priori unknown events in a social system. Therefore, as an initial step, we try to reduce the problem to a much simpler one, to understand whether the same mechanisms that have been observed in the natural sciences and engineering could also be present in sociological domains.
In our work, we provide a very first step towards tackling a substantially simpler question by focusing on a priori known events. We analyse a social media data set with a focus on classical variance and autocorrelation scaling-law warning signs. In particular, we consider a few events that are known to occur at a specific time of the year, e.g., Christmas, Halloween, and Thanksgiving. Then we consider time series of the frequency of Twitter hashtags related to the considered events a few weeks before the actual event, but excluding the event date itself and some time period before it.
Now suppose we do not know that a dramatic spike in the number of Twitter hashtags, such as #xmas or #thanksgiving, will occur on the actual event date. Are there signs of the same stochastic scaling laws observed in other dynamical systems visible some time before the event? The more fundamental question is: Are there similarities to known warning signs from other areas also present in social media data?
We answer this question affirmatively as we find that the a priori known events mentioned above are preceded by variance and autocorrelation growth (see Figure). Nevertheless, we are still very far from actually using social networks to predict the occurrence of many other drastic events. For example, it can also be shown that many spikes in Twitter activity are not predictable through variance and autocorrelation growth. Hence, a lot more research is needed to distinguish different dynamical processes that lead to large outbursts of activity on social media.
The findings suggest that further investigations of dynamical processes in social media would be worthwhile. Currently, a main focus in the research on social networks lies on structural questions, such as: Who connects to whom? How many connections do we have on average? Who are the hubs in social media? However, if one takes dynamical processes on the network, as well as the changing dynamics of the network topology, into account, one may obtain a much clearer picture of how social systems compare and relate to classical problems in physics, chemistry, biology, and engineering.
One of the highest points of the International Congress of Mathematicians, currently underway in Seoul, Korea, is the announcement of the Fields Medal prize winners. The prize is awarded every four years to up to four mathematicians under the age of 40, and is viewed as one of the highest honours a mathematician can receive.
This year sees the first ever female recipient of the Fields Medal, Maryam Mirzakhani, recognised for her highly original contributions to geometry and dynamical systems. Her work bridges several mathematical disciplines – hyperbolic geometry, complex analysis, topology, and dynamics – and influences them in return.
We’re absolutely delighted for Professor Mirzakhani, who serves on the editorial board for International Mathematics Research Notices. To celebrate the achievements of all of the winners, we’ve put together a reading list of free materials relating to their work and to fellow speakers at the International Congress of Mathematicians.
Noted by the International Mathematical Union as work contributing to Mirzakhani’s achievement, this paper investigates the dynamics of the earthquake flow defined by Thurston on the bundle PMg of geodesic measured laminations.
Manjul Bhargava joins Maryam Mirzakhani amongst this year’s winners of the Fields Medal. Here he uses Serre’s mass formula for totally ramified extensions to derive a mass formula that counts all étale algebra extensions of a local field F having a given degree n.
Several authors, some of whom are speaking at the International Congress of Mathematicians, have considered whether the ultrapower and the relative commutant of a C*-algebra or II1 factor depend on the choice of the ultrafilter.
Wooley’s paper, as well as his talk at the congress, investigates sums of mixed powers involving two squares, two cubes, and various higher powers concentrating on situations inaccessible to the Hardy-Littlewood method.
When we use a computer, its performance seems to degrade progressively. This is not a mere impression. An old version of Firefox, the free Web browser, was infamous for its “memory leaks”: it would consume increasing amounts of memory to the detriment of other programs. Bugs in the software actually do slow down the system. We all know what the solution is: reboot. We restart the computer, the memory is reset, and the performance is restored, until the bugs slow it down again.
Philosophy is a bit like a computer with a memory leak. It starts well, dealing with significant and serious issues that matter to anyone. Yet, in time, its very success slows it down. Philosophy begins to care more about philosophers’ questions than philosophical ones, consuming increasing amounts of intellectual attention. Scholasticism is the ultimate freezing of the system, the equivalent of Windows’ “blue screen of death”; so many resources are devoted to internal issues that no external input can be processed anymore, and the system stops. The world may be undergoing a revolution, but the philosophical discourse remains detached and utterly oblivious. Time to reboot the system.
Philosophical “rebooting” moments are rare. They are usually prompted by major transformations in the surrounding reality. Since the nineties, I have been arguing that we are witnessing one of those moments. It now seems obvious, even to the most conservative person, that we are experiencing a turning point in our history. The information revolution is profoundly changing every aspect of our lives, quickly and relentlessly. The list is known but worth recalling: education and entertainment, communication and commerce, love and hate, politics and conflicts, culture and health, … feel free to add your preferred topics; they are all transformed by technologies that have the recording and processing of information as their core functions. Meanwhile, philosophy is degrading into self-referential discussions on irrelevancies.
The result of a philosophical rebooting today can only be beneficial. Digital technologies are not just tools merely modifying how we deal with the world, like the wheel or the engine. They are above all formatting systems, which increasingly affect how we understand the world, how we relate to it, how we see ourselves, and how we interact with each other.
The ‘Fourth Revolution’ betrays what I believe to be one of the topics that deserves our full intellectual attention today. The idea is quite simple. Three scientific revolutions have had great impact on how we see ourselves. In changing our understanding of the external world they also modified our self-understanding. After the Copernican revolution, the heliocentric cosmology displaced the Earth and hence humanity from the centre of the universe. The Darwinian revolution showed that all species of life have evolved over time from common ancestors through natural selection, thus displacing humanity from the centre of the biological kingdom. And following Freud, we acknowledge nowadays that the mind is also unconscious. So we are not immobile, at the centre of the universe, we are not unnaturally separate and diverse from the rest of the animal kingdom, and we are very far from being minds entirely transparent to ourselves. One may easily question the value of this classic picture. After all, Freud was the first to interpret these three revolutions as part of a single process of reassessment of human nature and his perspective was blatantly self-serving. But replace Freud with cognitive science or neuroscience, and we can still find the framework useful to explain our strong impression that something very significant and profound has recently happened to our self-understanding.
Since the fifties, computer science and digital technologies have been changing our conception of who we are. In many respects, we are discovering that we are not standalone entities, but rather interconnected informational agents, sharing with other biological agents and engineered artefacts a global environment ultimately made of information, the infosphere. If we need a champion for the fourth revolution this should definitely be Alan Turing.
The fourth revolution offers a historical opportunity to rethink our exceptionalism in at least two ways. Our intelligent behaviour is confronted by the smart behaviour of engineered artefacts, which can be adaptively more successful in the infosphere. Our free behaviour is confronted by the predictability and manipulability of our choices, and by the development of artificial autonomy. Digital technologies sometimes seem to know more about our wishes than we do. We need philosophy to make sense of the radical changes brought about by the information revolution. And we need it to be at its best, for the difficulties we are facing are challenging. Clearly, we need to reboot philosophy now.
Image credit: Alan Turing Statue at Bletchley Park. By Ian Petticrew. CC-BY-SA-2.0 via Wikimedia Commons.
Math is on my mind lately as I wrap up the Parallelogram series. (Yes, Dear Readers, Book 4 is coming! There are just so many words.) I, like my main character Audie in the series, enjoy quantum physics but do not enjoy the math. Or, to put it less charitably, cannot do the math.
But I can’t help wondering if I would have had a completely different attitude toward math in school if I’d had a teacher like this. Or at least seen a demonstration like this. Because there’s no doubt Arthur Benjamin makes math FUN. (Although no matter how fun it is, I still think there’s no way mere mortals could do what he does.)
Baseball fans love to compare the players of today to the players who came before, but one must wonder how great the margin of error in these comparisons is. Is there any way of knowing who the real baseball greats are, and whose legend should stand the test of time?
Let’s take Omar Vizquel as an example. So says Wikipedia, “Vizquel is considered one of baseball’s all-time best fielding shortstops.” It’s true, Vizquel “is considered” a great fielder. Of shortstops, he
- holds the highest career fielding percentage of those with a long career;
- has participated in more double plays (and his primary double play partner just entered the Hall of Fame);
- is third in career assists; and
- has played more games at shortstop than anyone in major league history.
On top of all that, Vizquel has received more Gold Gloves than any other shortstop except for Ozzie “Wizard of Oz” Smith. Indeed, writers have described Omar and Ozzie as the “graceful Fred Astaire” and “acrobatic Gene Kelly,” respectively, of shortstops.
Vizquel has something of a signature play—fielding ordinary grounders (not just bunts) with his bare hand and throwing in one motion. He was the starting shortstop for the most successful American League team of the 1990s, second only to the Yankees. He hasn’t been much of a hitter, even for a shortstop, so it’s not unreasonable to infer he must have been a great fielder to hang on as long as he has.
But, after all that, how do we really know whether Vizquel actually is one of baseball’s all-time best fielding shortstops? With metrics.
Let’s start with the question: What is the job of a fielder? To help his team prevent runs. At shortstop, this mainly involves converting ground balls into outs and getting the second out on double plays—in other words, recording assists. (It is very rare that shortstops catch fly balls or pop ups that couldn’t be fielded by at least two and as many as five other fielders. Most of the differences in putout rates for shortstops reflect how much they ‘hog’ these easy chances, not how many marginal hits they help their teams prevent. And line drive putouts at short are mostly dumb-luck plays.)
It is not the job of a shortstop (or any fielder) to look “graceful” or make trick plays. It’s not even a fielder’s job to avoid errors. In fact, a fielder who makes ten more successful plays but also ten more errors has just the same value as the fielder who makes an average number of plays and errors, because an error is no worse than a play not made.
Any fielding metric for shortstop needs to estimate how many assists a shortstop generated above or below what an average shortstop would have, playing for the same team. My system uses some arithmetic and the statistical technique of “regression analysis,” resulting in what I call Defensive Regression Analysis, or DRA.
DRA estimates the number of assists the league average shortstop would have recorded in place of the shortstop you’re rating by starting with the average number of shortstop assists per team that year and adjusting that number up or down based on statistically significant relationships between shortstop assists and other defensive statistics of the player’s team that are
1. not influenced by the shortstop himself,
2. as little influenced by the fielding quality of his teammates as possible, and
3. independent (approximately) of each other.
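The estimation step described above can be sketched in a few lines. This is only a toy illustration, not the actual DRA model: real DRA uses several team-level predictors and league-wide data, while this sketch fits a single-predictor regression on invented team-season numbers just to show how "assists above what an average shortstop would record" falls out of the arithmetic.

```python
# Toy sketch of a DRA-style estimate. The numbers are invented, not
# real MLB data, and real DRA uses multiple predictors, not one.

def fit_line(xs, ys):
    """Ordinary least squares for a single predictor: y ~ a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical team-seasons: (team ground-ball outs, shortstop assists).
teams = [(1800, 430), (1750, 410), (1900, 470), (1650, 380), (1820, 445)]
xs = [t[0] for t in teams]
ys = [t[1] for t in teams]
a, b = fit_line(xs, ys)

# Expected assists for an average shortstop on a team that recorded
# 1780 ground-ball outs, versus what our shortstop actually recorded:
expected = a + b * 1780
actual = 455
print(f"assists above average: {actual - expected:+.1f}")
```

The sign and size of that final number is the whole game: a shortstop consistently tens of assists above his team-adjusted expectation is generating outs an average fielder would not have.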
With the rising popularity of Android (Google), iPhone, and iPads, I thought it would be a good idea to search for free math-related apps, starting with Android. Unfortunately, I was appalled that many of the popular apps collect too much information, namely, your unique phone id.
What's the big deal? As far as I can tell, your unique phone id is just like your Social Security number—it's not something that you give out to anyone who asks for it. Unfortunately, this is exactly the scenario that I kept finding in the Android market. Why does a flashcard app need the equivalent of your Social Security number? It seems a little fishy to me and I can't recommend those apps.
Happy Tau Day, the most exciting math holiday you’ve yet to discover! Today, June 28th is 6/28, which contains in order the first three digits of tau (τ), the rival of math’s most popular irrational number, pi (π).
In 2001, Bob Palais wrote an article for The Mathematical Intelligencer called “π is wrong!” In it, he insists that the choice of using π in our mathematical formulas for hundreds of years is no good. He argues that the use of τ would simplify many formulas and that its derivation is much more intuitive. (Notice that the symbol resembles that for pi, but with one "leg" instead of two.)
The significance of our beloved irrational number π is that it is equal to the ratio of the circumference of any circle to its diameter--in notation, π = C/d. However, the most defining characteristic of a circle is not its diameter but its radius. A circle is defined as the collection of points on a plane that are exactly the same distance, its radius, from a point, its center. Palais argues that intuition should direct us to the use of a more elegant Circle Constant, tau, where τ is the ratio of the circumference of a circle to its radius--in notation, τ = C/r.
Self-described “notorious mathematical propagandist” Michael Hartl takes the argument even further in his now-famous “The Tau Manifesto,” which he published on Tau Day of 2010, exactly one year ago. He demonstrates with many adapted formulas that the factor of 2 is unnecessary if we incorporate it into the ratio itself. For instance, the periods of basic trigonometric functions f(x) = sin(x), and f(x) = cos(x), are in both cases 2π. Why not change them to tau instead? Palais and Hartl each list numerous other examples from calculus and physics, in which the factor of 2 is rendered obsolete by replacing 2π with τ.
The really intuitive part is revealed if you think of angle measure. How things are done now with π, a half turn of the circle is π radians, and a full turn is 2π radians. Should we adopt τ instead, τ radians would be a full turn, τ/2 radians a half turn, τ/4 radians a quarter turn, and so on.
There are, of course, instances where π appears un-doubled. For instance, the formula for the area of a circle: A = πr². Hartl shows, in a mathematically sophisticated way, that the replacement of π by τ even in this instance is the more sound choice: the tau version, A = ½τr², is analogous to similar quadratic formulas in physics, such as kinetic energy, ½mv².
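The relationships above are easy to check numerically. Python’s standard library has in fact shipped the constant as math.tau since version 3.6, so a few lines confirm that tau is exactly 2π and that the tau-style circle formulas agree with the familiar pi-style ones:

```python
import math

# Python's standard library already ships the circle constant tau (= 2*pi).
tau = math.tau
assert math.isclose(tau, 2 * math.pi)

# Angle measure reads directly in tau: a fraction of a turn is that
# fraction of tau radians.
quarter_turn = tau / 4                 # same angle as pi/2
assert math.isclose(math.sin(quarter_turn), 1.0)

# Circle formulas side by side:
r = 3.0
circumference = tau * r                # C = tau*r    (vs. 2*pi*r)
area = 0.5 * tau * r ** 2              # A = tau*r^2/2 (vs. pi*r^2)
assert math.isclose(circumference, 2 * math.pi * r)
assert math.isclose(area, math.pi * r ** 2)
print(f"tau = {tau:.9f}")
```

None of this settles the aesthetic argument, of course; it just shows the two notations are arithmetically interchangeable.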
An article in today’s BBC News paints the issue as a violent conflict, with pi detractors up in arms over a lifetime of educational betrayal, which seems to this mathematician something of a manufactured controversy. (I can imagine you'd be upset if you are the sort of mathematician that has memorized pi to the nth digit. If you are one of these folks, here's the start for your new parlor trick: reciting tau, 6.283185307...)
They say that there is no such thing as a stupid question. New York State mathematics teachers whose students took the Regents Exam in Algebra 2 and Trigonometry last month (June 2011) are likely to disagree. The test contained a controversial question that asked students to find the inverse of a non-invertible function. Here’s the problem in question:
The problem was in the 2-point, or short answer free response, portion of the exam, testing the learning standard that demands students “determine the inverse of a function and use composition to justify the result.” (A2.A.45) The wording of the question strongly implies that the inverse of the function does indeed exist. However, since the function given is not one-to-one, there is no inverse. Teachers got loud, complaining to representatives of the Board of Regents, the group that writes, edits, and distributes the exam. The Board responded with a memo called, “Scoring Clarification for Teachers,” which acknowledged several ways that students could interpret the question and demonstrate their understanding of invertibility of functions.
Was the response satisfactory? The Board's memo cites “variations in the use of [inverse] notation throughout New York State,” which seems to evade blame for a lousy question. A prominent math teacher blogger responded on his blog, “How could the test-makers not be aware of ‘variations in notation’? Also, notice how there is an asymmetric justification burden on a kid claiming (correctly) that the inverse does not exist.” A lousy question shakes the faith that teachers and students have in the standardized test as a valid assessment of student understanding. For instance, the same blogger concluded, “I have no confidence in New York State’s ability to create a good test of mathematics, at any level.”
It is my sincere hope that this controversy and the appearance of a misleading question will lead to both (a) more opportunities to explore the meaning of invertible functions and one-to-one functions, demanding students to be more savvy test-takers; and (b) increased scrutiny and more careful construction of New York’s Regents exams. In short, as educators, better instruction and better assessment should be our smart answer to this, or any, stupid question.
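Since the exam question itself isn’t reproduced here, the underlying idea can be illustrated with a hypothetical stand-in. The function f(x) = x² − 3 (my example, not the exam’s) fails the one-to-one requirement on a symmetric domain, so no inverse function exists there; restrict the domain to non-negative inputs and invertibility is restored. A short injectivity check makes the point numerically:

```python
# Hypothetical example (not the actual Regents function): f(x) = x**2 - 3
# is not one-to-one on a symmetric domain, so it has no inverse there.

def is_one_to_one(f, domain):
    """Check injectivity of f over a finite sample of its domain."""
    seen = {}
    for x in domain:
        y = f(x)
        if y in seen and seen[y] != x:
            return False  # two distinct inputs share an output
        seen[y] = x
    return True

f = lambda x: x ** 2 - 3
print(is_one_to_one(f, range(-10, 11)))  # False: f(-2) == f(2)
print(is_one_to_one(f, range(0, 11)))    # True on the restricted domain
```

This is exactly the distinction the exam question blurred: "find the inverse" presupposes the horizontal-line test has already been passed.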
By Ian Stewart
Falling cats can turn over in mid-air. Well, most cats can. Our first cat, Seamus, didn’t have a clue. My wife, worried he might fall off a fence and hurt himself, tried to train him by holding him over a cushion and letting go. He enjoyed the game, but he never learned how to flip himself over.
This Day in World History - Each evening that weather permitted, Maria (pronounced Mah-RYE-uh) Mitchell mounted the stairs to the roof of her family’s Nantucket home to sweep the sky with a telescope looking for a comet. Mitchell—who had been taught mathematics and astronomy by her father—began the practice in 1836. Eleven years later, on October 1, 1847, her long labors finally paid off. When she saw the comet, she quickly summoned her father, who agreed with her conclusion.
Among mathematicians, it is always a happy moment when a long-standing problem is suddenly solved. The year 2012 started with such a moment, when an Irish mathematician named Gary McGuire announced a solution to the minimal-clue problem for Sudoku puzzles.
You have seen Sudoku puzzles, no doubt, since they are nowadays ubiquitous in newspapers and magazines. They look like this:
Your task is to fill in the vacant cells with the digits from 1-9 in such a way that each row, column and three by three block contains each digit exactly once. In a proper puzzle, the starting clues are such as to guarantee there is only one way of completing the square.
This particular puzzle has just seventeen starting clues. It had long been believed that seventeen was the minimum number for any proper puzzle. Mathematician Gordon Royle maintains an online database which currently contains close to fifty thousand puzzles with seventeen starting clues (in fact, the puzzle above is adapted from one of the puzzles in that list). However, despite extensive computer searching, no example of a puzzle with sixteen or fewer clues had ever been found.
The problem was that an exhaustive computer search seemed impossible. There were simply too many possibilities to consider. Even using the best modern hardware, and employing the most efficient search techniques known, hundreds of thousands of years would have been required.
Pure mathematics likewise provided little assistance. It is easy to see that seven clues must be insufficient. With seven starting clues there would be at least two digits that were not represented at the start of the puzzle. To be concrete, let us say that there were no 1s or 2s in the starting grid. Then, in any completion of the starting grid it would be possible simply to change all the 1s to 2s, and all the 2s to 1s, to produce a second valid solution to the puzzle. After making this observation, however, it is already unclear how to continue. Even a simple argument proving the insufficiency of eight clues has proven elusive.
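The swap argument above is concrete enough to verify by machine. The sketch below builds a known-valid completed Sudoku grid (using the classic shift pattern, an illustrative choice rather than anything from McGuire’s paper), swaps every 1 with every 2, and checks that the result is a different grid that still satisfies every row, column, and block constraint; hence any puzzle whose clues contain no 1s or 2s has at least two solutions:

```python
# Checking the seven-clue argument: swapping all 1s and 2s in a valid
# completed Sudoku grid yields a different, equally valid grid.

def is_valid_solution(g):
    """True if g is a completed 9x9 grid meeting all Sudoku constraints."""
    full = set(range(1, 10))
    rows = all(set(row) == full for row in g)
    cols = all({g[r][c] for r in range(9)} == full for c in range(9))
    boxes = all(
        {g[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)} == full
        for br in (0, 3, 6) for bc in (0, 3, 6)
    )
    return rows and cols and boxes

# A known-valid grid from the classic shift pattern.
grid = [[(r * 3 + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]

swap = {1: 2, 2: 1}
swapped = [[swap.get(v, v) for v in row] for row in grid]

print(is_valid_solution(grid), is_valid_solution(swapped))  # True True
print(grid != swapped)  # True: a genuinely different solution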
McGuire’s solution requires a combination of mathematics and computer science. To reduce the time required for an exhaustive search he employed the idea of an “unavoidable set.” Consider the shaded cells in this Sudoku square:
Now imagine a starting puzzle having this square for a solution. Can you see why we would need to have at least one starting clue in one of those shaded cells? The reason is that if we did not, then we would be able to toggle the digits in those cells to produce a second solution to the same puzzle. In fact, this particular Sudoku square has a lot of similar unavoidable sets; in general some squares will have more than others, and of different types. Part of McGuire’s solution involved finding a large collection of certain types of unavoidable sets in every Sudoku square under consideration.
Finding these unavoidable sets permits a dramatic reduction in the size of the space that must be searched. Rather than searching through every sixteen-clue subset of a given Sudoku square, desperately looking for one that is actually a proper puzzle, we need only consider sets of sixteen starting clues containing at least one clue from each unavoidable set.
Halloween has always been a fun time of year for me. I love dressing up in costume. It's very much like creating the characters in my stories, only in costume I become a character for real. In fact, I bring some costume pieces along with me when I do school visits and help the students devise new and interesting characters.
So today's post is a collection of interesting Halloween(ish) news I've unearthed of late.
Of course, you know I love libraries, so how cool is a haunted one? That's right, in Deep River, Connecticut, the public library (a former home built in 1881 by a local businessman) has not just one ghost but many. Wouldn't that make for some interesting storytimes?
The American Library Association's GREAT WEBSITES FOR KIDS isn't too scary, but there are a frightfully wonderful number of cool places to visit there. Take for example this website on BATS--the kind that fly in the night. That's kind of spooky.
Or try National Geographic's CAT site. Have you ever seen a cat skeleton?
So I admit, Math was always a little scary for me. That's why I've included this site here called COOL MATH--An Amusement Park of Math and More. Check it out for puzzles, games, and Bubba Man in his awesome Halloween costume.
If all these Halloween antics make you hungry, stop by the For Kids section here on my site and find the recipe for SPIDER SNACKS. Then you can munch along as you do the HALLOWEEN CROSSWORD, lurking just around the corner.
Do you remember with fondness the thrill of mathematical discovery? Your first geometry proof, using pi to calculate areas of circles, the imaginary number i, Pascal’s Triangle, and the Fibonacci Sequence may be distant memories, but the concepts still intrigue you. If you’re a math geek like I am, reading the Ponderables Illustrated History of Numbers is the perfect way to capture the joy you once felt.
Mathematical principles have not always been known. They developed throughout the ages by some of the masterminds of the sciences. In this illustrated history, we learn who was responsible for significant discoveries, and how they came about. One hundred “Ponderables” are presented for our enjoyment and enlightenment.
I have to admit, I am very biased in reviewing this book. I have always loved mathematics. Its perfect logic, symmetry, and order have been constant companions for me. And if you also love this exact science, you’ll love this book. I felt like I had taken a trip back to my favorite high school math classes. This is the perfect gift for any serious math student or your favorite math teacher. And if math isn’t your favorite subject, look for Ponderables in Chemistry, Space, Physics, Philosophy, and Computing.
29 November 2012 is the 140th anniversary of the death of mathematician Mary Somerville, the nineteenth century’s “Queen of Science”. Several years after her death, Oxford University’s Somerville College was named in her honor — a poignant tribute because Mary Somerville had been completely self-taught. In 1868, when she was 87, she had signed J. S. Mill’s (unsuccessful) petition for female suffrage, but I think she’d be astonished that we’re still debating “the woman question” in science. Physics, in particular — a subject she loved, especially mathematical physics — is still a very male-dominated discipline, and men as well as women are concerned about it.
Of course, science today is far more complex than it was in Somerville’s time, and for the past forty years feminist critics have been wondering if it’s the kind of science that women actually want; physics, in particular, has improved the lives of millions of people over the past 300 years, but it’s also created technologies and weapons that have caused massive human, social and environmental destruction. So I’d like to revisit an old debate: are science’s obstacles for women simply a matter of managing its applications in a more “female-friendly” way, or is there something about its exclusively male origins that has made science itself sexist?
To manage science in a more female-friendly way, it would be interesting to know if there’s any substance behind gender stereotypes such as that women prefer to solve immediate human problems, and are less interested than men in detached, increasingly expensive fundamental research, and in military and technological applications. Either way, though, it’s self-evident that women should have more say in how science is applied and funded, which means it’s important to have more women in decision-making positions — something we’re still far from achieving.
But could the scientific paradigm itself be alienating to women? Mary Somerville didn’t think so, but it’s often argued (most recently by some eco-feminist and post-colonial critics) that the seventeenth-century Scientific Revolution, which formed the template for modern science, was constructed by European men, and that consequently, the scientific method reflects a white, male way of thinking that inherently preferences white men’s interests and abilities over those of women and non-Westerners. It’s a problematic argument, but justification for it has included an important critique of reductionism — namely, that Western male experimental scientists have traditionally studied physical systems, plants, and even human bodies by dissecting them, studying their components separately and losing sight of the whole system or organism.
The limits of the reductionist philosophy were famously highlighted in biologist Rachel Carson’s book, Silent Spring, which showed that the post-War boom in chemical pest control didn’t take account of the whole food chain, of which insects are merely a part. Other dramatic illustrations are climate change, and medical disasters like the thalidomide tragedy: clearly, it’s no longer enough to focus selectively on specific problems such as the action of a drug on a particular symptom, or the local effectiveness of specific technologies; instead, scientists must consider the effect of a drug or medical procedure on the whole person, whilst new technological inventions shouldn’t be separated from their wider social and environmental ramifications.
In its proper place, however, reductionism in basic scientific research is important. (The recent infamous comment by American Republican Senate nominee Todd Akin — that women can “shut down” their bodies during a “legitimate rape”, in order not to become pregnant — illustrates the need for a basic understanding of how the various parts of the human body work.) I’m not sure if this kind of reductionism is a particularly male or particularly Western way of thinking, but either way there’s much more to the scientific method than this; it’s about developing testable hypotheses from observations (reductionist or holistic), and then testing those hypotheses in as objective a way as possible. The key thing in observing the world is curiosity, and this is a human trait, discernible in all children, regardless of race or gender. Of course, girls have traditionally faced more cultural restraints than boys, so perhaps we still need to encourage girls to be actively curious about the world around them. (For instance, it’s often suggested that women prefer biology to physics because they want to help people — and yet, many of the recent successes in medical and biological science would have been impossible without the technology provided by fundamental, curiosity-driven physics.)
Like Mary Somerville, I think the scientific method has universal appeal, but I also think feminist and other critics are right to question its patriarchal and capitalist origins. Although science at its best is value-free, it’s part of the broader community, whose values are absorbed by individual scientists. So much so that Yale researchers Moss-Racusin et al recently uncovered evidence that many scientists themselves, male and female, have an unconscious sexist bias. In their widely reported study, participants judged the same job application (for a lab manager position) to be less competent if it had a (randomly assigned) female name than if it had a male name.
In Mary Somerville’s day, such bias was overt, and it had the authority of science itself: women’s smaller brain size was considered sufficient to “prove” female intellectual inferiority. It was bad science, and it shows how patriarchal perceptions can skew the interpretation not just of women’s competence, but also of scientific data itself. (Without proper vigilance, this kind of subjectivity can slip through the safeguards of the scientific method because of other prejudices, too, such as racism, or even the agendas of funding bodies.) Of course, acknowledging the existence of patriarchal values in society isn’t about hating men or assuming men hate women. Mary Somerville met with “the utmost kindness” from individual scientific men, but that didn’t stop many of them from seeing her as the exception that proved the male-created rule of female inferiority. After all, it takes analysis and courage to step outside a long-accepted norm. And so, the “woman question” is still with us — but in trying to resolve it, we might not only find ways to remove existing gender biases, but also broaden the conversation about what sort of science we all want in the twenty-first century.
Three words to sum up Alan Turing? Humour. He had an impish, irreverent and infectious sense of humour. Courage. Isolation. He loved to work alone. Reading his scientific papers, it is almost as though the rest of the world — the busy community of human minds working away on the same or related problems — simply did not exist. Turing was determined to do it his way. Three more words? A patriot. Unconventional — he was uncompromisingly unconventional, and he didn’t much care what other people thought about his unusual methods. A genius. Turing’s brilliant mind was sparsely furnished, though. He was a Spartan in all things, inner and outer, and had no time for pleasing décor, soft furnishings, superfluous embellishment, or unnecessary words. To him what mattered was the truth. Everything else was mere froth. He succeeded where a better furnished, wordier, more ornate mind might have failed. Alan Turing changed the world.
What would it have been like to meet him? Turing was tallish (5 feet 10 inches) and broadly built. He looked strong and fit. You might have mistaken his age, as he always seemed younger than he was. He was good looking, but strange. If you came across him at a party you would notice him all right. In fact you might turn round and say “Who on earth is that?” It wasn’t just his shabby clothes or dirty fingernails. It was the whole package. Part of it was the unusual noise he made. This has often been described as a stammer, but it wasn’t. It was his way of preventing people from interrupting him, while he thought out what he was trying to say. Ah – Ah – Ah – Ah – Ah. He did it loudly.
If you crossed the room to talk to him, you’d probably find him gauche and rather reserved. He was decidedly lah-di-dah, but the reserve wasn’t standoffishness. He was a man of few words, shy. Polite small talk did not come easily to him. He might if you were lucky smile engagingly, his blue eyes twinkling, and come out with something quirky that would make you laugh. If conversation developed you’d probably find him vivid and funny. He might ask you, in his rather high-pitched voice, whether you think a computer could ever enjoy strawberries and cream, or could make you fall in love with it. Or he might ask if you can say why a face is reversed left to right in a mirror but not top to bottom.
Once you got to know him Turing was fun — cheerful, lively, stimulating, comic, brimming with boyish enthusiasm. His raucous crow-like laugh pealed out boisterously. But he was also a loner. “Turing was always by himself,” said codebreaker Jerry Roberts: “He didn’t seem to talk to people a lot, although with his own circle he was sociable enough.” Like everyone else Turing craved affection and company, but he never seemed to quite fit in anywhere. He was bothered by his own social strangeness — although, like his hair, it was a force of nature he could do little about. Occasionally he could be very rude. If he thought that someone wasn’t listening to him with sufficient attention he would simply walk away. Turing was the sort of man who, usually unintentionally, ruffled people’s feathers — especially pompous people, people in authority, and scientific poseurs. He was moody too. His assistant at the National Physical Laboratory, Jim Wilkinson, recalled with amusement that there were days when it was best just to keep out of Turing’s way. Beneath the cranky, craggy, irreverent exterior there was an unworldly innocence though, as well as sensitivity and modesty.
Turing died at the age of only 41. His ideas lived on, however, and at the turn of the millennium Time magazine listed him among the twentieth century’s 100 greatest minds, alongside the Wright brothers, Albert Einstein, DNA busters Crick and Watson, and the discoverer of penicillin, Alexander Fleming. Turing’s achievements during his short life were legion. Best known as the man who broke some of Germany’s most secret codes during the war of 1939-45, Turing was also the father of the modern computer. Today, all who click, tap or touch to open are familiar with the impact of his ideas. To Turing we owe the brilliant innovation of storing applications, and all the other programs necessary for computers to do our bidding, inside the computer’s memory, ready to be opened when we wish. We take for granted that we use the same slab of hardware to shop, manage our finances, type our memoirs, play our favourite music and videos, and send instant messages across the street or around the world. Like many great ideas this one now seems as obvious as the wheel and the arch, but with this single invention — the stored-program universal computer — Turing changed the way we live. His universal machine caught on like wildfire; today personal computer sales hover around the million a day mark. In less than four decades, Turing’s ideas transported us from an era where ‘computer’ was the term for a human clerk who did the sums in the back office of an insurance company or science lab, into a world where many young people have never known life without the Internet.
This year, 2012, marks the 325th anniversary of the first publication of the legendary Principia (Mathematical Principles of Natural Philosophy), the 500-page book in which Sir Isaac Newton presented the world with his theory of gravity. It was the first comprehensive scientific theory in history, and it’s withstood the test of time over the past three centuries.
Unfortunately, this superb legacy is often overshadowed, not just by Einstein’s achievement but also by Newton’s own secret obsession with Biblical prophecies and alchemy. Given these preoccupations, it’s reasonable to wonder if he was quite the modern scientific guru his legend suggests, but personally I’m all for celebrating him as one of the greatest geniuses ever. Although his private obsessions were excessive even for the seventeenth century, he was well aware that in eschewing metaphysical, alchemical, and mystical speculation in his Principia, he was creating a new way of thinking about the fundamental principles underlying the natural world. To paraphrase Newton himself, he changed the emphasis from metaphysics and mechanism to experiment and mathematical analogy. His method has proved astonishingly fruitful, but initially it was quite controversial.
He had developed his theory of gravity to explain the cause of the mysterious motion of the planets through the sky: in a nutshell, he derived a formula for the force needed to keep a planet moving in its observed elliptical orbit, and he connected this force with everyday gravity through the experimentally derived mathematics of falling motion. Ironically (in hindsight), some of his greatest peers, like Leibniz and Huygens, dismissed the theory of gravity as “mystical” because it was “too mathematical.” As far as they were concerned, the law of gravity may have been brilliant, but it didn’t explain how an invisible gravitational force could reach all the way from the sun to the earth without any apparent material mechanism. Consequently, they favoured the mainstream Cartesian “theory”, which held that the universe was filled with an invisible substance called ether, whose material nature was completely unknown, but which somehow formed into great swirling whirlpools that physically dragged the planets in their orbits.
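Newton's connection between the force holding a planet in orbit and everyday gravity can be sketched numerically. The snippet below is purely illustrative (it uses modern SI values, which are not in the article): it shows that the inverse-square gravitational pull of the Sun on the Earth matches, to within a fraction of a percent, the centripetal force required to keep the Earth on a near-circular one-year orbit.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # mass of the Sun, kg
M_EARTH = 5.972e24     # mass of the Earth, kg
R_ORBIT = 1.496e11     # mean Sun-Earth distance, m

# Newton's law of universal gravitation: F = G * M * m / r^2
f_gravity = G * M_SUN * M_EARTH / R_ORBIT**2

# Centripetal force needed for a circular orbit of radius r and
# period T: F = m * v^2 / r, with orbital speed v = 2*pi*r / T.
T = 365.25 * 24 * 3600          # one year, in seconds
v = 2 * math.pi * R_ORBIT / T
f_centripetal = M_EARTH * v**2 / R_ORBIT

# The two forces agree closely: gravity supplies exactly the force
# needed to hold the planet in its observed orbit.
print(f_gravity, f_centripetal)
```

The agreement of these two independently computed forces is, in miniature, the argument by which Newton tied planetary motion to terrestrial gravity.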
The only evidence for this vortex “theory” was the physical fact of planetary motion, but this fact alone could lead to any number of causal hypotheses. By contrast, Newton explained the mystery of planetary motion in terms of a known physical phenomenon, gravity; he didn’t need to postulate the existence of fanciful ethereal whirlpools. As for the question of how gravity itself worked, Newton recognized this was beyond his scope — a challenge for posterity — but he knew that for the task at hand (explaining why the planets move) “it is enough that gravity really exists and acts according to the laws that we have set forth and is sufficient to explain all the motions of the heavenly bodies…”
What’s more, he found a way of testing his theory by using his formula for gravitational force to make quantitative predictions. For instance, he realized that comets were not random, unpredictable phenomena (which the superstitious had feared as fiery warnings from God), but small celestial bodies following well-defined orbits like the planets. His friend Halley famously used the theory of gravity to predict the date of return of the comet now named after him. As it turned out, Halley’s prediction was fairly good, although Clairaut — working half a century later but just before the predicted return of Halley’s comet — used more sophisticated mathematics to apply Newton’s laws to make an even more accurate prediction.
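The kind of quantitative prediction Halley made can be illustrated with Kepler's third law, which follows from Newton's law of gravity: for bodies orbiting the Sun, the square of the period equals the cube of the semi-major axis when measured in years and astronomical units. The semi-major axis figure below is a modern value, not one from the article.

```python
# Kepler's third law for the solar system: T^2 = a^3,
# with T in years and a in astronomical units (AU).
def orbital_period_years(semi_major_axis_au):
    """Orbital period in years for a given semi-major axis in AU."""
    return semi_major_axis_au ** 1.5

# Halley's comet has a semi-major axis of roughly 17.8 AU (modern value).
period = orbital_period_years(17.8)
print(round(period))  # prints 75, close to the observed ~75-76 year interval
```

Halley's actual calculation was harder, since he had to infer the orbit from historical sightings, but the underlying logic is this one: a known orbit plus Newton's law fixes the date of return.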
Clairaut’s calculations illustrate the fact that despite the phenomenal depth and breadth of Principia, it took a further century of effort by scores of mathematicians and physicists to build on Newton’s work and to create modern “Newtonian” physics in the form we know it today. But Newton had created the blueprint for this science, and its novelty can be seen from the fact that some of his most capable peers missed the point. After all, he had begun the radical process of transforming “natural philosophy” into theoretical physics — a transformation from traditional qualitative philosophical speculation about possible causes of physical phenomena, to a quantitative study of experimentally observed physical effects. (From this experimental study, mathematical propositions are deduced and then made general by induction, as he explained in Principia.)
Even the secular nature of Newton’s work was controversial (and under apparent pressure from critics, he did add a brief mention of God in an appendix to later editions of Principia). Although Leibniz was a brilliant philosopher (and he was also the co-inventor, with Newton, of calculus), one of his stated reasons for believing in the ether rather than the Newtonian vacuum was that God would show his omnipotence by creating something, like the ether, rather than leaving vast amounts of nothing. (At the quantum level, perhaps his conclusion, if not his reasoning, was right.) He also invoked God to reject Newton’s inspired (and correct) argument that gravitational interactions between the various planets themselves would eventually cause noticeable distortions in their orbits around the sun; Leibniz claimed God would have had the foresight to give the planets perfect, unchanging perpetual motion. But he was on much firmer ground when he questioned Newton’s (reluctant) assumption of absolute rather than relative motion, although it would take Einstein to come up with a relativistic theory of gravity.
Einstein’s theory is even more accurate than Newton’s, especially on a cosmic scale, but within its own terms — that is, describing the workings of our solar system (including, nowadays, the motion of our own satellites) — Newton’s law of gravity is accurate to within one part in ten million. As for his method of making scientific theories, it was so profound that it underlies all the theoretical physics that has followed over the past three centuries. It’s amazing: one of the most religious, most mystical men of his age put his personal beliefs aside and created the quintessential blueprint for our modern way of doing science in the most objective, detached way possible. Einstein agreed; he wrote a moving tribute in the London Times in 1919, shortly after astronomers had provided the first experimental confirmation of his theory of general relativity:
“Let no-one suppose, however, that the mighty work of Newton can really be superseded by [relativity] or any other theory. His great and lucid ideas will retain their unique significance for all time as the foundation of our modern conceptual structure in the sphere of [theoretical physics].”
This year marked the centenary of the birth of Alan Mathison Turing; among the many, many commemorative events that occurred during the Alan Turing Year were the reissues of two biographies of AMT. One was Andrew Hodges's extraordinary work Alan Turing: The Enigma. The other was Sara Turing's long-unavailable book about her son, simply titled [...]
Two contrasting experiences stick in mind from my first year at university.
First, I spent a lot of time in lectures that I did not understand. I don’t mean lectures in which I got the general gist but didn’t quite follow the technical details. I mean lectures in which I understood not one thing from the beginning to the end. I still went to all the lectures and wrote everything down – I was a dutiful sort of student – but this was hardly the ideal learning experience.
Second, at the end of the year, I was awarded first class marks. The best thing about this was that later that evening, a friend came up to me in the bar and said, “Hey Lara, I hear you got a first!” and I was rapidly surrounded by other friends offering enthusiastic congratulations. This was a revelation. I had attended the kind of school at which students who did well were derided rather than congratulated. I was delighted to find myself in a place where success was celebrated.
Looking back, I think that the interesting thing about these two experiences is the relationship between the two. How could I have done so well when I understood so little of so many lectures?
I don’t think that there was a problem with me. I didn’t come out at the very top, but obviously I had the ability and dedication to get to grips with the mathematics. Nor do I think that there was a problem with the lecturers. Like the vast majority of the mathematicians I have met since, my lecturers cared about their courses and put considerable effort into giving a logically coherent presentation. Not all were natural entertainers, but there was nothing fundamentally wrong with their teaching.
I now think that the problems were more subtle, and related to two issues in particular.
First, there was a communication gap: the lecturers and I did not understand mathematics in the same way. Mathematicians understand mathematics as a network of axioms, definitions, examples, algorithms, theorems, proofs, and applications. They present and explain these, hoping that students will appreciate the logic of the ideas and will think about the ways in which they can be combined. I didn’t really know how to learn effectively from lectures on abstract material, and research indicates that I was pretty typical in this respect.
Students arrive at university with a set of expectations about what it means to ‘do mathematics’ – about what kind of information teachers will provide and about what students are supposed to do with it. Some of these expectations work well at school but not at university. Many students need to learn, for instance, to treat definitions as stipulative rather than descriptive, to generate and check their own examples, to interpret logical language in a strict, mathematical way rather than a more flexible, context-influenced way, and to infer logical relationships within and across mathematical proofs. These things are expected, but often they are not explicitly taught.
My second problem was that I didn’t have very good study skills. I wasn’t terrible – I wasn’t lazy, or arrogant, or easily distracted, or unwilling to put in the hours. But I wasn’t very effective in deciding how to spend my study time. In fact, I don’t remember making many conscious decisions about it at all. I would try a question, find it difficult, stare out of the window, become worried, attempt to study some section of my lecture notes instead, fail at that too, and end up discouraged. Again, many students are like this. I have met a few who probably should have postponed university until they were ready to exercise some self-discipline, but most do want to learn.
What they lack is a set of strategies for managing their learning – for deciding how to distribute their time when no-one is checking what they’ve done from one class to the next, and for maintaining momentum when things get difficult. Many could improve their effectiveness by doing simple things like systematically prioritizing study tasks, and developing a routine in which they study particular subjects in particular gaps between lectures. Again, the responsibility for learning these skills lies primarily with the student.
Personally, I never got to a point where I understood every lecture. But I learned how to make sense of abstract material, I developed strategies for studying effectively, and I maintained my first class marks. What I would now say to current students is this: take charge. Find out what lecturers and tutors are expecting, and take opportunities to learn about good study habits. Students who do that should find, like I did, that undergraduate mathematics is challenging, but a pleasure to learn.
Subscribe to the OUPblog via email or RSS. Subscribe to only mathematics articles on the OUPblog via email or RSS.
Subscribe to only education articles on the OUPblog via email or RSS. Image credit: Screenshot of Oxford English Dictionary definition of mathematics, n., via OED Online. All rights reserved.
Ducklings in a Row by Renee Heiss, illustrated by Matthew B. Holcomb. Character Publishing. 4 stars. Back Cover: When Mama Duck asks her ducklings to arrange themselves from One to Ten, the baby ducks learn much more than sequencing skills. In Ducklings in a Row, ten unique duckling personalities combine to form a humorous …
Nowadays it appears impossible to open a newspaper or switch on the television without hearing about “big data”. Big data, it sometimes seems, will provide answers to all the world’s problems. Management consulting company McKinsey, for example, promises “a tremendous wave of innovation, productivity, and growth … all driven by big data”.
An alien observer visiting the Earth might think it represents a major scientific breakthrough. Google Trends shows references to the phrase bobbing along at about one per week until 2011, at which point there began a dramatic, steep, and almost linear increase in references to the phrase. It’s as if no one had thought of it until 2011. Which is odd because data mining, the technology of extracting valuable, useful, or interesting information from large data sets, has been around for some 20 years. And statistics, which lies at the heart of all of this, has been around as a formal discipline for a century or more.
Or perhaps it’s not so odd. If you look back to the beginning of data mining, you find a very similar media enthusiasm for the advances it was going to bring, the breakthroughs in understanding, the sudden discoveries, the deep insights. In fact it almost looks as if we have been here before. All of this leads one to suspect that there’s less to the big data enthusiasm than meets the eye: it’s not so much a sudden change in our technical abilities as a sudden media recognition of what data scientists, and especially statisticians, are capable of.
Of course, I’m not saying that the increasing size of data sets does not lead to promising new opportunities – though I would question whether it’s the “large” that really matters as much as the novelty of the data sets. The tremendous economic impact of GPS data (estimated to be $150-270bn per year), retail transaction data, or genomic and bioinformatics data arises not from the size of these data sets, but from the fact that they provide new kinds of information. And while it’s true that a massive mountain of data needed to be explored to detect the Higgs boson, the core aspect was the nature of the data rather than its amount.
Moreover, if I’m honest, I also have to admit that it’s not solely statistics which leads to the extraction of value from these massive data sets. Often it’s a combination of statistical inferential methods (e.g. determining an accurate geographical location from satellite signals) along with data manipulation algorithms for search, matching, sorting and so on. How these two aspects are balanced depends on the particular application. Locating a shop which stocks that out of print book is less of an inferential statistical problem and more of a search issue. Determining the riskiness of a company seeking a loan owes little to search but much to statistics.
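The contrast between the two aspects can be made concrete with a toy sketch (all names and figures below are hypothetical, invented for illustration): the bookshop question is a pure lookup over data, while the riskiness question means comparing one observation against a statistical summary of its peers.

```python
import statistics

# Search problem: which shops stock a given ISBN? A pure lookup,
# with no statistical inference involved. (Hypothetical inventory.)
inventory = {
    "Shop A": {"0199661936", "0198712537"},
    "Shop B": {"0199678111"},
    "Shop C": {"0199661936"},
}
isbn = "0199661936"
stockists = sorted(shop for shop, books in inventory.items() if isbn in books)

# Inference problem: is this company unusually risky? Here a crude
# z-score of its debt ratio against a (hypothetical) peer sample.
peer_debt_ratios = [0.31, 0.42, 0.28, 0.35, 0.39, 0.33]
mean = statistics.mean(peer_debt_ratios)
sd = statistics.stdev(peer_debt_ratios)
z = (0.58 - mean) / sd  # applicant's ratio, several s.d. above its peers

print(stockists, round(z, 1))
```

Real applications blend the two, but the balance differs: the first computation never generalises beyond the records it touches, while the second draws a conclusion about an entity from a model of a population.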
Diagram of Total Information Awareness system designed by the Information Awareness Office
Some time after the phrase “data mining” hit the media, it suffered a backlash. Predictably enough, much of this was based around privacy concerns. A paradigmatic illustration was the Total Information Awareness project in the United States. Its basic aim was to search for suspicious behaviour patterns within vast amounts of personal data, to identify individuals likely to commit crimes, especially terrorist offences. It included data on web browsing, credit card transactions, driving licences, court records, passport details, and so on. After concerns were raised, it was suspended in 2003 (though it is claimed that the software continued to be used by various agencies). As will be evident from recent events, concerns about the security agencies’ monitoring of the public continue.
Technology is amoral — neither intrinsically moral nor immoral. Morality lies in the hands of those who wield it. This is as true of big data technology as it is of nuclear technology and biotechnology. It is abundantly clear — if only from the examples we have already seen — that massive data sets do hold substantial promise for enhancing the well-being of mankind, but we must be aware of the risks. A suitable balance must be struck.
It’s also important to note that the mere existence of huge data files is of itself of no benefit to anyone. For these data sets to be beneficial, it’s necessary to be able to use the data to build models, to estimate effect sizes, to determine if an observed effect should be regarded as mere chance variation, to be sure it’s not a data quality issue, and so on. That is, statistical skills are critical to making use of the big data resources. In just the same way that vast underground oil reserves were useless without the technology to turn them into motive power, so the vast collections of data are useless without the technology to analyse them. Or, as I sometimes put it, people don’t want data, what they want are answers. And statistics provides the tools for finding those answers.
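As a small illustration of "determining if an observed effect should be regarded as mere chance variation", here is a permutation test on synthetic data. This is a sketch of one standard statistical tool, not a method named in the article, and the measurements are invented.

```python
import random

random.seed(0)

def permutation_test(a, b, n_resamples=10_000):
    """Estimate the probability that a mean difference at least as large
    as the observed one would arise if group labels were random."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    count = 0
    for _ in range(n_resamples):
        random.shuffle(pooled)  # randomly reassign the group labels
        diff = abs(sum(pooled[:len(a)]) / len(a)
                   - sum(pooled[len(a):]) / len(b))
        if diff >= observed:
            count += 1
    return count / n_resamples

# Synthetic example: two small samples with a visible shift in mean.
group_a = [5.1, 4.8, 5.6, 5.3, 4.9, 5.4]
group_b = [4.2, 4.5, 4.1, 4.4, 4.0, 4.6]
p_value = permutation_test(group_a, group_b)
# A small p-value indicates the shift is unlikely to be chance variation.
print(p_value)
```

This is the kind of judgement, separating genuine effects from noise, that no amount of raw data storage provides on its own.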
Image credit: Diagram of Total Information Awareness system designed by the Information Awareness Office. Public domain via Wikimedia Commons.