Viewing: Blog Posts Tagged with: Mathematics, Most Recent at Top
Results 1 - 25 of 45
1. Celebrating Women in STEM

It is becoming widely accepted that women have, historically, been underrepresented and often completely written out of work in the fields of Science, Technology, Engineering, and Mathematics (STEM). Explanations for the gender gap in STEM fields range from genetically-determined interests, structural and territorial segregation, discrimination, and historic stereotypes. As well as encouraging steps toward positive change, we would also like to retrospectively honour those women whose past works have been overlooked.

From astronomer Caroline Herschel to the first female winner of the Fields Medal, Maryam Mirzakhani, you can use our interactive timeline to learn more about the women whose works in STEM fields have changed our world.

With free Oxford University Press content, we tell the stories and share the research of both famous and forgotten women.

Featured image credit: Microscope. Public Domain via Pixabay.

The post Celebrating Women in STEM appeared first on OUPblog.

0 Comments on Celebrating Women in STEM as of 1/23/2015 12:03:00 AM
2. Why causality now?

Head hits cause brain damage, but not always. Should we ban sport to protect athletes? Exposure to electromagnetic fields is strongly associated with cancer development. Should we ban mobile phones and encourage old-fashioned wired communication? The sciences are getting more and more specialized and it is difficult to judge whether, say, we should trust homeopathy, fund a mission to Mars, or install solar panels on our roofs. We are confronted with questions about causality on an everyday basis, as well as in science and in policy.

Causality has been a headache for scholars since ancient times. The oldest extensive writings may be those of Aristotle, who made causality a central part of his worldview. We then jump some 2,000 years to Hume, with whom causality again became a prominent topic; Hume was a skeptic, in the sense that he believed we cannot think of causal relationships as logically necessary, nor can we establish them with certainty.

The next major philosophical figure after Hume was probably David Lewis, who proposed quite a controversial account saying, roughly, that something was a cause of an effect in this world if, in other nearby possible worlds where that cause didn’t happen, the effect didn’t happen either. This brings us to the present day and to work in computer science originated by Judea Pearl and by Spirtes, Glymour, and Scheines and their collaborators.

All of this is highly theoretical and formal. Can we reconstruct philosophical theorizing about causality in the sciences in simpler terms than this? Sure we can!

One way is to start from scientific practice. Even though scientists often don’t talk explicitly about causality, it is there. Causality is an integral part of the scientific enterprise. Scientists don’t worry too much about what causality is – a chiefly metaphysical question – but are instead concerned with a number of activities that, one way or another, bear on causal notions. These are what we call the five scientific problems of causality:

Phrenology: causality, mirthfulness, and time. Photo by Stuart, CC-BY-NC-ND-2.0 via Flickr.
  • Inference: Does C cause E? To what extent?
  • Explanation: How does C cause or prevent E?
  • Prediction: What can we expect if C does (or does not) occur?
  • Control: What factors should we hold fixed to understand better the relation between C and E? More generally, how do we control the world or an experimental setting?
  • Reasoning: What considerations enter into establishing whether/how/to what extent C causes E?

This does not mean that metaphysical questions cease to be interesting. Quite the contrary! But by engaging with scientific practice, we can work towards a timely and solid philosophy of causality.

The traditional philosophical treatment of causality is to give a single conceptualization, an account of the concept of causality, which may also tell us what causality in the world is, and may then help us understand causal methods and scientific questions.

Our aim, instead, is to focus on the scientific questions, bearing in mind that there are five of them, and build a more pluralist view of causality, enriched by attention to the diversity of scientific practices. We think that many existing approaches to causality, such as mechanism, manipulationism, inferentialism, capacities and processes can be used together, as tiles in a causal mosaic that can be created to help you assess, develop, and criticize a scientific endeavour.

In this spirit we are attempting to develop, in collaboration, complementary ideas of causality as information (Illari) and variation (Russo). The idea is that we can conceptualize in general terms the causal linking or production of effect by the cause as the transmission of information between cause and effect (following Salmon); while variation is the most general conceptualization of the patterns of difference-making we can detect in populations where a cause is acting (following Mill). The thought is that we can use these complementary ideas to address the scientific problems.

For example, we can think about how we use complementary evidence in causal inference, tracking information transmission, and combining that with studies of variation in populations. Alternatively, we can think about how measuring variation may help us formulate policy decisions, as might seeking to block possible avenues of information transmission. Having both concepts available assists in describing this, and reasoning well – and they will also be combined with other concepts that have been made more precise in the philosophical literature, such as capacities and mechanisms.

Ultimately, the hope is that sharpening up the reasoning will assist in the conceptual enterprise that lies at the intersection of philosophy and science. And help decide whether to encourage sport, mobile phones, homeopathy and solar panels aboard the mission to Mars!

The post Why causality now? appeared first on OUPblog.

0 Comments on Why causality now? as of 1/18/2015 5:27:00 AM
3. Accusation breeds guilt

One of the central tasks when reading a mystery novel (or sitting on a jury) is figuring out which of the characters are trustworthy. Someone guilty will of course say they aren’t guilty, just like the innocent – the real question in these situations is whether we believe them.

The guilty party – let’s call her Annette – can try to convince us of her trustworthiness by only saying things that are true, insofar as such truthfulness doesn’t incriminate her (the old adage of making one’s lies as close to the truth as possible applies here). But this is not the only strategy available. In addition, Annette can attempt to deflect suspicion away from herself by questioning the trustworthiness of others – in short, she can say something like:

“I’m not a liar, Betty is!”

However, accusations of untrustworthiness of this sort are peculiar. The point of Annette’s pronouncement is to affirm her innocence, but such protestations rarely increase our overall level of trust. Either we don’t believe Annette, in which case our trust in Annette is likely to drop (without affecting how much we trust Betty), or we do believe Annette, in which case our trust in Betty is likely to decrease (without necessarily increasing our overall trust in Annette).

Thus, accusations of untrustworthiness tend to decrease the overall level of trust we place in those involved. But is this reflective of an actual increase in the number of lies told? In other words, does the logic of such accusations make it the case that, the higher the number of accusations, the higher the number of characters that must be lying?

Consider a group of people G, and imagine that, simultaneously, each person in the group accuses one, some, or all of the other people in the group of lying right at this minute. For example, if our group consists of three people:

G = {Annette, Betty, Charlotte}

then Betty can make one of three distinct accusations:

Scales of justice, photo by Michael Coghlan CC-BY-SA-2.0 via Flickr

“Annette is lying.”

“Charlotte is lying.”

“Both Annette and Charlotte are lying.”

Likewise, Annette and Charlotte each have three choices regarding their accusations. We can then ask which members of the group could be, or which must be, telling the truth, and which could be, or which must be, lying by examining the logical relations between the accusations made by each member of the group. For example, if Annette accuses both Betty and Charlotte of lying, then either (i) Annette is telling the truth, in which case both Betty and Charlotte’s accusations must be false, or (ii) Annette is lying, in which case either Betty is telling the truth or Charlotte is telling the truth (or both).

This set-up allows for cases that are paradoxical. If:

Annette says “Betty is lying.”

Betty says “Charlotte is lying.”

Charlotte says “Annette is lying.”

then there is no coherent way to assign the labels “liar” and “truth-teller” to the three in such a way as to make sense. Since we are here interested in investigating results regarding how many lies are told (rather than scenarios in which the notion of lying versus telling the truth breaks down), we shall restrict our attention to those groups, and their accusations, that are not paradoxical.

The following are two simple results that constrain the number of liars, and the number of truth-tellers, in any such group (I’ll provide proofs of these results in the comments after a few days).

“Accusations of untrustworthiness tend to decrease the overall level of trust we place in those involved”

Result 1: If, for some number m, each person in the group accuses at least m other people in the group of lying (and there is no paradox) then there are at least m liars in the group.

Result 2: If, for any two people in the group p1 and p2, either p1 accuses p2 of lying, or p2 accuses p1 of lying (and there is no paradox), then exactly one person in the group is telling the truth, and everyone else is lying.

These results support an affirmative answer to our question: Given a group of people, the more accusations of untrustworthiness (i.e., of lying) are made, the higher the minimum number of people in the group that must be lying. If there are enough accusations to guarantee that each person accuses at least n people, then there are at least n liars, and if there are enough to guarantee that there is an accusation between each pair of people, then all but one person is lying. (Exercise for the reader: show that there is no situation of this sort where everyone is lying).
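
To see how this counting works in practice, here is a minimal brute-force sketch (my own illustration, not part of the original post); the three-person group and its accusation map are made-up assumptions, and any non-paradoxical accusation pattern can be substituted for them.

```python
from itertools import product

# Each person simultaneously accuses a set of others of "lying right now".
# An assignment marks each person True (truth-teller) or False (liar).
# A truth-teller's accusation must hold (everyone they accuse is a liar);
# a liar's accusation must fail (someone they accuse is telling the truth).
accusations = {
    "Annette": {"Betty", "Charlotte"},
    "Betty": {"Annette"},
    "Charlotte": {"Annette"},
}

def coherent(assignment):
    for person, accused in accusations.items():
        accusation_holds = all(not assignment[a] for a in accused)
        if assignment[person] != accusation_holds:
            return False
    return True

people = sorted(accusations)
solutions = [
    dict(zip(people, values))
    for values in product([True, False], repeat=len(people))
    if coherent(dict(zip(people, values)))
]

if not solutions:
    print("Paradox: no coherent liar/truth-teller assignment exists.")
else:
    min_liars = min(sum(not v for v in s.values()) for s in solutions)
    print("Coherent assignments:", solutions)
    print("Minimum number of liars:", min_liars)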
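```

Running this on the sample pattern above illustrates Result 1 for m = 1: every coherent assignment contains at least one liar. Swapping in the cyclic pattern from the paradox example (Annette accuses Betty, Betty accuses Charlotte, Charlotte accuses Annette) makes the solution list come out empty.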

Of course, the set-up just examined is extremely simple, and rather artificial. Conversations (or mystery novels, or court cases, etc.) in real life develop over time, involve all sorts of claims other than accusations, and can involve accusations of many different forms not included above, including:

“Everything Annette says is a lie!”

“Betty said something false yesterday!”

“What Charlotte is about to say is a lie!”

Nevertheless, with a bit more work (which I won’t do here) we can show that, the more accusations of untrustworthiness are made in a particular situation, the more of the claims made in that situation must be lies (of course, the details will depend both on the number of accusations and the kind of accusations). Thus, it’s as the title says: accusation breeds guilt!

Note: The inspiration for this blog post, as well as the phrase “Accusation breeds guilt”, comes from a brief discussion of this phenomenon – in particular, of ‘Result 2’ above – in ‘Propositional Discourse Logic’, by S. Dyrkolbotn & M. Walicki, Synthese 191: 863–899.

The post Accusation breeds guilt appeared first on OUPblog.

0 Comments on Accusation breeds guilt as of 1/11/2015 4:38:00 AM
4. Why study paradoxes?

Why should you study paradoxes? The easiest way to answer this question is with a story:

In 2002 I was attending a conference on self-reference in Copenhagen, Denmark. During one of the breaks I got a chance to chat with Raymond Smullyan, who is amongst other things an accomplished magician, a distinguished mathematical logician, and perhaps the most well-known popularizer of `Knight and Knave’ (K&K) puzzles.

K&K puzzles involve an imaginary island populated by two tribes: the Knights and the Knaves. Knights always tell the truth, and Knaves always lie (further, members of both tribes are forbidden to engage in activities that might lead to paradoxes or situations that break these rules). Other than their linguistic behavior, there is nothing that distinguishes Knights from Knaves.

Typically, K&K puzzles involve trying to answer questions based on assertions made by, or questions answered by, an inhabitant of the island. For example, a classic K&K puzzle involves meeting an islander at a fork in the road, where one path leads to riches and success and the other leads to pain and ruin. You are allowed to ask the islander one question, after which you must pick a path. Not knowing to which tribe the islander belongs, and hence whether she will lie or tell the truth, what question should you ask?

(Answer: You should ask “Which path would someone from the other tribe say was the one leading to riches and success?”, and then take the path not indicated by the islander).

Back to Copenhagen in 2002: Seizing my chance, I challenged Smullyan with the following K&K puzzle, of my own devising:

There is a nightclub on the island of Knights and Knaves, known as the Prime Club. The Prime Club has one strict rule: the number of occupants in the club must be a prime number at all times.

Pythagoras paradox, by Jan Arkesteijn (own work). Public domain via Wikimedia Commons.

The Prime Club also has strict bouncers (who stand outside the doors and do not count as occupants) enforcing this rule. In addition, a strange tradition has become customary at the Prime Club: Every so often the occupants form a conga line, and sing a song. The first lyric of the song is:

“At least one of us in the club is a Knave.”

and is sung by the first person in the line. The second lyric of the song is:

“At least two of us in the club are Knaves.”

and is sung by the second person in the line. The third person (if there is one) sings:

“At least three of us in the club are Knaves.”

And so on down the line, until everyone has sung a verse.

One day you walk by the club, and hear the song being sung. How many people are in the club?

Smullyan’s immediate response to this puzzle was something like “That can’t be solved – there isn’t enough information”. But he then stood alone in the corner of the reception area for about five minutes, thinking, before returning to confidently (and correctly, of course) answer “Two!”

I won’t spoil things by giving away the solution – I’ll leave that mystery for interested readers to solve on their own. (Hint: if the song is sung with any other prime number of islanders in the club, a paradox results!) I will note that the song is equivalent to a more formal construction involving a list of sentences of the form:

At least one of sentences S1 – Sn is false.

At least two of sentences S1 – Sn are false.

…

At least n of sentences S1 – Sn are false.
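
As a small illustration of why the hint works, here is a sketch (mine, not the author's) that brute-forces every true/false assignment to S1–Sn and reports whether the list is coherent or paradoxical; the loop bound is an arbitrary choice.

```python
from itertools import product

def coherent_assignments(n):
    """All self-consistent truth-value assignments to S1..Sn,
    where Sk says: 'at least k of S1..Sn are false'."""
    results = []
    for values in product([True, False], repeat=n):
        num_false = sum(not v for v in values)
        # Sk must be assigned True exactly when its content holds (num_false >= k).
        if all(values[k - 1] == (num_false >= k) for k in range(1, n + 1)):
            results.append(values)
    return results

for n in range(1, 9):
    sols = coherent_assignments(n)
    print(n, "coherent" if sols else "paradoxical", sols)
```

The output makes the hint above easy to verify: among prime numbers of singers, only n = 2 avoids paradox.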

The point of this story isn’t to brag about having stumped a famous logician (even for a mere five minutes), although I admit that this episode (not only stumping Smullyan, but meeting him in the first place) is still one of the highlights of my academic career.

Frances MacDonald – A Paradox 1905, by Frances MacDonald McNair. Public domain via Wikimedia Commons.

Instead, the story, and the puzzle at the center of it, illustrates the reasons why I find paradoxes so fascinating and worthy of serious intellectual effort. The standard story regarding why paradoxes are so important is that, although they are sometimes silly in-and-of-themselves, paradoxes indicate that there is something deeply flawed in our understanding of some basic philosophical notion (truth, in the case of the semantic paradoxes linked to K&K puzzles).

Another reason for their popularity is that they are a lot of fun. Both of these are really good reasons for thinking deeply about paradoxes. But neither is the real reason why I find them so fascinating. The real reason I find paradoxes so captivating is that they are much more mathematically complicated, and as a result much more mathematically interesting, than standard accounts (which typically equate paradoxes with the presence of some sort of circularity) might have you believe.

The Prime Club puzzle demonstrates that whether a particular collection of sentences is or is not paradoxical can depend on all sorts of surprising mathematical properties, such as whether there is an even or odd number of sentences in the collection, or whether the number of sentences in the collection is prime or composite, or all sorts of even weirder and more surprising conditions.

Other examples demonstrate that whether a construction (or, equivalently, a K&K story) is paradoxical can depend on whether the referential relation involved in the construction (i.e. the relation that holds between two sentences if one refers to the other) is symmetric, or is transitive.

The paradoxicality of still another type of construction, involving infinitely many sentences, depends on whether cofinitely many of the sentences each refer to cofinitely many of the other sentences in the construction (a set is cofinite if its complement is finite). And this only scratches the surface!

The more I think about and work on paradoxes, the more I marvel at how complicated the mathematical conditions for generating paradoxes are: it takes a lot more than the mere presence of circularity to generate a mathematical or semantic paradox, and stating exactly what is minimally required is still too difficult a question to answer precisely. And that’s why I work on paradoxes: their surprising mathematical complexity and mathematical beauty. Fortunately for me, a lot of work remains to be done, and a lot of complexity and beauty remains to be discovered.

The post Why study paradoxes? appeared first on OUPblog.

0 Comments on Why study paradoxes? as of 9/7/2014 5:38:00 AM
5. Recurring decimals, proof, and ice floes

Why do we teach students how to prove things we all know already, such as 0.9999••• =1?

Partly, of course, so they develop thinking skills to use on questions whose truth-status they won’t know in advance. Another part, however, concerns the dialogue nature of proof: a proof must be not only correct, but also persuasive; and persuasiveness is not objective and absolute, it’s a two-body problem. One needs two not only to tango.

The statements — (1) ice floats on water, (2) ice is less dense than water — are widely acknowledged as facts and, usually, as interchangeable facts. But although rooted in everyday experience, they are not that experience. We have firstly represented stuffs of experience by sounds English speakers use to stand for them, then represented these sounds by word-processor symbols that, by common agreement, stand for them. Two steps away from reality already! This is what humans do: we invent symbols for perceived realities and, eventually, evolve procedures for manipulating them in ways that mirror how their real-world origins behave. Virtually no communication between two persons, and possibly not much internal dialogue within one mind, can proceed without this. Man is a symbol-using animal.

Seagull via Dreamstime, courtesy of author.

Statement (1) counts as fact because folk living in cooler climates have directly observed it throughout history (and conflicting evidence is lacking). Statement (2) is factual in a significantly different sense, arising by further abstraction from (1) and from a million similar experiential observations. Partly to explain (1) and its many cousins, we have conceived ideas like mass, volume, ratio of mass to volume, and explored for generations towards the conclusion that mass-to-volume works out the same for similar materials under similar conditions, and that the comparison of mass-to-volume ratios predicts which materials will float upon others.

Statement (3): 19 is a prime number. In what sense is this a fact? Its roots are deep in direct experience: the hunter-gatherer wishing to share nineteen apples equally with his two brothers or his three sons or his five children must have discovered that he couldn’t without extending his circle of acquaintance so far that each got only one, long before he had a name for what we call ‘nineteen’. But (3) is many steps away from the experience where it is grounded. It involves conceptualisation of numerical measurements of sets one encounters, and millennia of thought to acquire symbols for these and codify procedures for manipulating them in ways that mirror how reality functions. We’ve done this so successfully that it’s easy to forget how far from the tangibles of experience they stand.

Statement (4): √2 is not exactly the ratio of two whole numbers. Most first-year mathematics students know this. But by this stage of abstraction, separating its fact-ness from its demonstration is impossible: the property of being exactly a fraction is not detectable by physical experience. It is a property of how we abstracted and systematised the numbers that proved useful in modelling reality, not of our hands-on experience of reality. The reason we regard √2’s irrationality as factual is precisely because we can give a demonstration within an accepted logical framework.
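
The demonstration alluded to here is short enough to state; the following is the standard argument in modern notation (my restatement, not quoted from the post).

```latex
% Classic proof by contradiction that sqrt(2) is not a ratio of whole numbers.
\textbf{Claim.} $\sqrt{2}$ is not exactly the ratio of two whole numbers.

\textbf{Proof sketch.} Suppose $\sqrt{2} = p/q$ with $p, q$ whole numbers sharing no common factor.
Then $p^{2} = 2q^{2}$, so $p^{2}$ is even, hence $p$ is even; write $p = 2r$.
Substituting gives $4r^{2} = 2q^{2}$, i.e.\ $q^{2} = 2r^{2}$, so $q$ is even as well,
contradicting the assumption that $p$ and $q$ have no common factor. $\square$
```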

What then about recurring decimals? For persuasive argument, first ascertain the distance from reality at which the question arises: not, in this case, the rarified atmosphere of undergraduate mathematics but the primary school classroom. Once a child has learned rituals for dividing whole numbers and the convenience of decimal notation, she will try to divide, say, 2 by 3 and will hit a problem. The decimal representation of the answer does not cease to spew out digits of lesser and lesser significance no matter how long she keeps turning the handle. What should we reply when she asks whether zero point infinitely many 6s is or is not two thirds, or even — as a thoughtful child should — whether zero point infinitely many 6s is a legitimate symbol at all?

The answer must be tailored to the questioner’s needs, but the natural way forward — though it took us centuries to make it logically watertight! — is the nineteenth-century definition of sum of an infinite series. For the primary school kid it may suffice to say that, by writing down enough 6s, we’d get as close to 2/3 as we’d need for any practical purpose. For differential calculus we’d need something better, and for model-theoretic discourse involving infinitesimals something better again. Yet the underpinning mathematics for equalities like 0.6666••• = 2/3 where the question arises is the nineteenth-century one. Its fact-ness therefore resembles that of ice being less dense than water, of 19 being prime or of √2 being irrational. It can be demonstrated within a logical framework that systematises our observations of real-world experiences. So it is a fact not about reality but about the models we build to explain reality. Demonstration is the only tool available for establishing its truth.
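
Spelled out in the nineteenth-century terms the author mentions (my notation, not the original post's), the recurring decimal is defined as the limit of its partial sums:

```latex
0.666\ldots \;=\; \sum_{k=1}^{\infty} \frac{6}{10^{k}}
\;=\; \lim_{n \to \infty} \sum_{k=1}^{n} \frac{6}{10^{k}}
\;=\; \lim_{n \to \infty} \frac{2}{3}\bigl(1 - 10^{-n}\bigr)
\;=\; \frac{2}{3}.
```

The partial sum after n digits is (2/3)(1 − 10⁻ⁿ), which can be made as close to 2/3 as we please by taking enough digits; this is exactly the "close enough for any practical purpose" answer suggested above for the primary school questioner, and the same argument with 9s in place of 6s gives 0.999••• = 1.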

Mathematics without proof is not like an omelette without salt and pepper; it is like an omelette without egg.

Headline image credit: Floating ice sheets in Antarctica. CC0 via Pixabay.

The post Recurring decimals, proof, and ice floes appeared first on OUPblog.

0 Comments on Recurring decimals, proof, and ice floes as of 10/11/2014 11:19:00 AM
6. The deconstruction of paradoxes in epidemiology

If a “revolution” in our field or area of knowledge was ongoing, would we feel it and recognize it? And if so, how?

I think a methodological “revolution” is probably going on in the science of epidemiology, but I’m not totally sure. Of course, in science not being sure is part of our normal state. And we mostly like it. Many times I have had the feeling that a revolution was ongoing in epidemiology. While reading scientific articles, for example. And I saw signs of it, which I think are clear, when reading the latest draft of the forthcoming book Causal Inference by M.A. Hernán and J.M. Robins from Harvard (Chapman & Hall / CRC, 2015). I think the “revolution” — or should we just call it a “renewal”? — is deeply changing how epidemiological and clinical research is conceived, how causal inferences are made, and how we assess the validity and relevance of epidemiological findings. I suspect it may be having an immense impact on the production of scientific evidence in the health, life, and social sciences. If this were so, then the impact would also be large on most policies, programs, services, and products in which such evidence is used. And it would be affecting thousands of institutions, organizations, and companies, and millions of people.

One example: at present, in clinical and epidemiological research, every week “paradoxes” are being deconstructed. Apparent paradoxes that have long been observed, and whose causal interpretation was at best dubious, are now shown to have little or no causal significance. For example, while obesity is a well-established risk factor for type 2 diabetes (T2D), among people who have already developed T2D the obese fare better than T2D individuals with normal weight. Obese diabetics appear to survive longer and to have a milder clinical course than non-obese diabetics. But it is now being shown that the observation lacks causal significance. (Yes, indeed, an observation may be real and yet lack causal meaning.) The demonstration comes from physicians, epidemiologists, and mathematicians like Robins, Hernán, and colleagues as diverse as S. Greenland, J. Pearl, A. Wilcox, C. Weinberg, S. Hernández-Díaz, N. Pearce, C. Poole, T. Lash, J. Ioannidis, P. Rosenbaum, D. Lawlor, J. Vandenbroucke, G. Davey Smith, T. VanderWeele, or E. Tchetgen, among others. They are building methodological knowledge upon knowledge and methods generated by graph theory, computer science, or artificial intelligence. Perhaps the simplest way to explain why observations such as this “obesity paradox” lack causal significance is that “conditioning on a collider” (in our example, focusing only on individuals who developed T2D) creates a spurious association between obesity and survival.
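
A toy simulation can make "conditioning on a collider" concrete. The sketch below is mine, not from the post; all probabilities and the variable names are invented for illustration, and obesity is given no causal effect on death at all in the data-generating process.

```python
import random

random.seed(0)
N = 200_000

def bern(p):
    """Draw a single Bernoulli(p) outcome."""
    return random.random() < p

rows = []
for _ in range(N):
    obese = bern(0.30)
    other_risk = bern(0.30)  # unmeasured factor that raises both T2D risk and mortality
    # T2D is the collider: it is caused both by obesity and by the unmeasured factor.
    t2d = bern(0.05 + 0.30 * obese + 0.30 * other_risk)
    # Death depends on the unmeasured factor only; obesity has NO causal effect on it here.
    death = bern(0.05 + 0.25 * other_risk)
    rows.append((obese, t2d, death))

def death_rate(subset):
    return sum(d for _, _, d in subset) / len(subset)

obese_all = [r for r in rows if r[0]]
lean_all = [r for r in rows if not r[0]]
obese_t2d = [r for r in rows if r[0] and r[1]]
lean_t2d = [r for r in rows if not r[0] and r[1]]

print("Whole population:  death rate obese = %.3f, non-obese = %.3f"
      % (death_rate(obese_all), death_rate(lean_all)))
print("T2D patients only: death rate obese = %.3f, non-obese = %.3f"
      % (death_rate(obese_t2d), death_rate(lean_t2d)))
```

In the simulated whole population the death rates of obese and non-obese people are essentially equal (by construction), yet within the diabetic subset the obese appear to fare better: an "obesity paradox" produced purely by conditioning on the collider.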

Influenza virus research by James Gathany for CDC. Public domain via Wikimedia Commons.

The “revolution” is partly founded on complex mathematics and concepts such as “counterfactuals,” as well as on attractive “causal diagrams” like Directed Acyclic Graphs (DAGs). Causal diagrams are a simple way to encode our subject-matter knowledge, and our assumptions, about the qualitative causal structure of a problem. Causal diagrams also encode information about potential associations between the variables in the causal network. DAGs must be drawn following rules much more strict than the informal, heuristic graphs that we all use intuitively. Amazingly, but not surprisingly, the new approaches provide insights that are beyond most methods in current use. In particular, the new methods go far deeper than, and beyond, the methods of “modern epidemiology,” a methodological, conceptual, and partly ideological current whose main emergence took place in the 1980s, led by statisticians and epidemiologists such as O. Miettinen, B. MacMahon, K. Rothman, S. Greenland, S. Lemeshow, D. Hosmer, P. Armitage, J. Fleiss, D. Clayton, M. Susser, D. Rubin, G. Guyatt, D. Altman, J. Kalbfleisch, R. Prentice, N. Breslow, N. Day, D. Kleinbaum, and others.

We live exciting days of paradox deconstruction. It is probably part of a wider cultural phenomenon, if you think of the “deconstruction of the Spanish omelette” authored by Ferran Adrià when he was the world-famous chef at the elBulli restaurant. Yes, just kidding.

Right now I cannot find a better or easier way to document the possible “revolution” in epidemiological and clinical research. Worse, I cannot find a firm way to assess whether my impressions are true. No doubt this is partly due to my ignorance in the social sciences. Actually, I don’t know much about social studies of science, epistemic communities, or knowledge construction. Maybe this is why I claimed that a sociology of epidemiology is much needed. A sociology of epidemiology would apply the scientific principles and methods of sociology to the science, discipline, and profession of epidemiology in order to improve understanding of the wider social causes and consequences of epidemiologists’ professional and scientific organization, patterns of practice, ideas, knowledge, and cultures (e.g., institutional arrangements, academic norms, scientific discourses, defense of identity, and epistemic authority). It could also address the patterns of interaction of epidemiologists with other branches of science and professions (e.g. clinical medicine, public health, the other health, life, and social sciences), and with social agents, organizations, and systems (e.g. the economic, political, and legal systems). I believe the tradition of sociology in epidemiology is rich, while the sociology of epidemiology is virtually uncharted (in the sense of neither mapped nor surveyed) and unchartered (i.e. not furnished with a charter or constitution).

Another way I can suggest to look at what may be happening with clinical and epidemiological research methods is to read the changes that we are witnessing in the definitions of basic concepts such as risk, rate, risk ratio, attributable fraction, bias, selection bias, confounding, residual confounding, interaction, cumulative and density sampling, open population, test hypothesis, null hypothesis, causal null, causal inference, Berkson’s bias, Simpson’s paradox, frequentist statistics, generalizability, representativeness, missing data, standardization, or overadjustment. The possible existence of a “revolution” might also be assessed in recent and new terms such as collider, M-bias, causal diagram, backdoor (biasing path), instrumental variable, negative controls, inverse probability weighting, identifiability, transportability, positivity, ignorability, collapsibility, exchangeable, g-estimation, marginal structural models, risk set, immortal time bias, Mendelian randomization, nonmonotonic, counterfactual outcome, potential outcome, sample space, or false discovery rate.

You may say: “And what about textbooks? Are they changing dramatically? Has one changed the rules?” Well, the new generation of textbooks is just emerging, and very few people have yet read them. Two good examples are the already mentioned text by Hernán and Robins, and the soon to be published Explanation in causal inference: Methods for mediation and interaction by T. VanderWeele (Oxford University Press, 2015). Clues can also be found in widely used textbooks by K. Rothman et al. (Modern Epidemiology, Lippincott-Raven, 2008), M. Szklo and J. Nieto (Epidemiology: Beyond the Basics, Jones & Bartlett, 2014), or L. Gordis (Epidemiology, Elsevier, 2009).

Finally, another good way to assess what might be changing is to read what gets published in top journals such as Epidemiology, the International Journal of Epidemiology, the American Journal of Epidemiology, or the Journal of Clinical Epidemiology. Pick up any issue of the main epidemiologic journals and you will find several examples of what I suspect is going on. If you feel like it, look for the DAGs. I recently saw a tweet saying “A DAG in The Lancet!”. It was a surprise: major clinical journals are lagging behind. But they will soon follow and adopt the new methods: the clinical relevance of the latter is huge. Or is it not such a big deal? If no “revolution” is going on, how are we to know?

Feature image credit: Test tubes by PublicDomainPictures. Public Domain via Pixabay.

The post The deconstruction of paradoxes in epidemiology appeared first on OUPblog.

0 Comments on The deconstruction of paradoxes in epidemiology as of 1/1/1900
7. What do rumors, diseases, and memes have in common?

Are you worried about catching the flu, or perhaps even Ebola? Just how worried should you be? Well, that depends on how fast a disease will spread over social and transportation networks, so it’s obviously important to obtain good estimates of the speed of disease transmission and to figure out good containment strategies to combat disease spread.

Diseases, rumors, memes, and other information all spread over networks. A lot of research has explored the effects of network structure on such spreading. Unfortunately, most of this research has a major issue: it considers networks that are not realistic enough, and this can lead to incorrect predictions of transmission speeds, of which people are most important in a network, and so on. So how does one address this problem?

Traditionally, most studies of propagation on networks assume a very simple network structure that is static and only includes one type of connection between people. By contrast, real networks change in time: one contacts different people during weekdays and on weekends, one (hopefully) stays home when one is sick, and new university students arrive from all parts of the world every autumn to settle into new cities. Real networks also include multiple types of social ties (Facebook, Twitter, and – gasp – even face-to-face friendships), multiple modes of transportation, and so on. That is, we consume and communicate information through all sorts of channels. To consider a network with only one type of social tie ignores these facts and can potentially lead to incorrect predictions of which memes go viral and how fast information spreads. It also fails to allow differentiation between people who are important in one medium and people who are important in a different medium (or across multiple media). In fact, most real networks include a far richer “multilayer” structure. Collapsing such structures to obtain and then study a simpler network representation can yield incorrect answers for how fast diseases or ideas spread, the robustness level of infrastructures, how long it takes for interacting oscillators to synchronize, and more.
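
As a minimal illustration of the contrast between a multilayer representation and a collapsed one, here is a sketch of my own (not from the article) using the networkx library; the people and ties are invented.

```python
import networkx as nx

# Two layers over the same set of people: face-to-face contacts vs. online contacts.
face_to_face = nx.Graph([("Ana", "Bo"), ("Bo", "Cat"), ("Cat", "Dev")])
online = nx.Graph([("Ana", "Dev"), ("Bo", "Dev"), ("Ana", "Cat")])
layers = {"face_to_face": face_to_face, "online": online}

# The collapsed (aggregated) network keeps an edge if it appears in any layer,
# discarding the information about which kind of tie each edge represents.
collapsed = nx.compose(face_to_face, online)

for name, layer in layers.items():
    print(name, "degrees:", dict(layer.degree()))
print("collapsed degrees:", dict(collapsed.degree()))
```

In this toy example every person has degree 3 in the collapsed graph and so looks identical, while the layers show that, for instance, Ana is peripheral face-to-face but comparatively well connected online; that is exactly the kind of distinction that matters when one channel spreads a disease and another spreads a meme.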

Image credit: Mobile Phone, by geralt. Public domain via Pixabay.

Recently, an increasingly large number of researchers are studying mathematical objects called “multilayer networks”. These generalize ordinary networks and allow one to incorporate time-dependence, multiple modes of connection, and other complexities. Work on multilayer networks dates back many decades in fields like sociology and engineering, and of course it is well-known that networks don’t exist in isolation but rather are coupled to other networks. The last few years have seen a rapid explosion of new theoretical tools to study multilayer networks.

And what types of things do researchers need to figure out? For one thing, it is known that multilayer structures induce correlations that are invisible if one collapses multilayer networks into simpler representations, so it is essential to figure out when and by how much such correlations increase or decrease the propagation of diseases and information, how they change the ability of oscillators to synchronize, and so on. From the standpoint of theory, it is necessary to develop better methods to measure multilayer structures, as a large majority of the tools that have been used thus far to study multilayer networks are mostly just more complicated versions of existing diagnostics and models. We need to do better. It is also necessary to systematically examine the effects of multilayer structures, such as correlations between different layers (e.g., perhaps a person who is important for the social network that is encapsulated in one layer also tends to be important in other layers?), on different types of dynamical processes. In these efforts, it is crucial to consider not only simplistic (“toy”) models — as in most of the work on multilayer networks thus far — but to move the field towards the examination of ever more realistic and diverse models and to estimate the parameters of these models from empirical data. As our review article illustrates, multilayer networks are both exciting and important to study, but the increasingly large community that is studying them still has a long way to go. We hope that our article will help steer these efforts, which promise to be very fruitful.

The post What do rumors, diseases, and memes have in common? appeared first on OUPblog.

0 Comments on What do rumors, diseases, and memes have in common? as of 11/3/2014 3:19:00 AM
8. Celebrating Alan Turing

Alan Mathison Turing (1912-1954) was a mathematician and computer scientist, remembered for his revolutionary Automatic Computing Engine, on which the first personal computer was based, and his crucial role in breaking the ENIGMA code during the Second World War. He continues to be regarded as one of the greatest scientists of the 20th century.

We live in an age that Turing both predicted and defined. His life and achievements are starting to be celebrated in popular culture, largely with the help of the newly released film The Imitation Game, starring Benedict Cumberbatch as Turing and Keira Knightley as Joan Clarke. We’re proud to publish some of Turing’s own work in mathematics, computing, and artificial intelligence, as well as numerous explorations of his life and work. Use our interactive Enigma Machine below to learn more about Turing’s extraordinary achievements.

 

Image credits: (1) Bletchley Park Bombe by Antoine Taveneaux. CC-BY-SA-3.0 via Wikimedia Commons. (2) Alan Turing Aged 16, Unknown Artist. Public domain via Wikimedia Commons. (3) Good question by Garrett Coakley. CC-BY-SA 2.0 via Flickr

The post Celebrating Alan Turing appeared first on OUPblog.

0 Comments on Celebrating Alan Turing as of 1/1/1900
9. A very short trivia quiz

In order to celebrate Trivia Day, we have put together a quiz with questions chosen at random from Very Short Introductions online. This is the perfect quiz for those who know a little about a lot. The topics range from Geopolitics to Happiness, and from French Literature to Mathematics. Do you have what it takes to take on this very short trivia quiz and become a trivia master? Take the quiz to find out…


We hope you enjoyed testing your trivia knowledge in this very short quiz.

Headline image credit: Pondering Away. © GlobalStock  via iStock Photo.

The post A very short trivia quiz appeared first on OUPblog.

0 Comments on A very short trivia quiz as of 1/8/2015 1:01:00 AM
10. Mathematics: An Illustrated History of Numbers (Ponderables)


Edited by: Tom Jackson
Publisher: Shelter Harbor Press
Genre: Mathematics / History
ISBN: 978-0-9853230-4-2
Pages: 168
Price: $24.95


Do you remember with fondness the thrill of mathematical discovery? Your first geometry proof, using pi to calculate areas of circles, the imaginary number i, Pascal’s Triangle, and the Fibonacci Sequence may be distant memories, but the concepts still intrigue you. If you’re a math geek like I am, reading the Ponderables Illustrated History of Numbers is the perfect way to capture the joy you once felt.

Mathematical principles have not always been known. They developed throughout the ages by some of the masterminds of the sciences. In this illustrated history, we learn who was responsible for significant discoveries, and how they came about. One hundred “Ponderables” are presented for our enjoyment and enlightenment.

I have to admit, I am very biased in reviewing this book. I have always loved mathematics. Its perfect logic, symmetry, and order have been constant companions for me. And if you also love this exact science, you’ll love this book. I felt like I had taken a trip back to my favorite high school math classes. This is the perfect gift for any serious math student or your favorite math teacher. And if math isn’t your favorite subject, look for Ponderables in Chemistry, Space, Physics, Philosophy, and Computing.

Reviewer: Alice Berger


1 Comment on Mathematics: An Illustrated History of Numbers (Ponderables), last added: 11/11/2012
11. What sort of science do we want?

By Robyn Arianrhod


29 November 2012 is the 140th anniversary of the death of mathematician Mary Somerville, the nineteenth century’s “Queen of Science”. Several years after her death, Oxford University’s Somerville College was named in her honor — a poignant tribute because Mary Somerville had been completely self-taught. In 1868, when she was 87, she had signed J. S. Mill’s (unsuccessful) petition for female suffrage, but I think she’d be astonished that we’re still debating “the woman question” in science. Physics, in particular — a subject she loved, especially mathematical physics — is still a very male-dominated discipline, and men as well as women are concerned about it.

Of course, science today is far more complex than it was in Somerville’s time, and for the past forty years feminist critics have been wondering if it’s the kind of science that women actually want; physics, in particular, has improved the lives of millions of people over the past 300 years, but it’s also created technologies and weapons that have caused massive human, social and environmental destruction. So I’d like to revisit an old debate: are science’s obstacles for women simply a matter of managing its applications in a more “female-friendly” way, or is there something about its exclusively male origins that has made science itself sexist?

To manage science in a more female-friendly way, it would be interesting to know if there’s any substance behind gender stereotypes such as that women prefer to solve immediate human problems, and are less interested than men in detached, increasingly expensive fundamental research, and in military and technological applications. Either way, though, it’s self-evident that women should have more say in how science is applied and funded, which means it’s important to have more women in decision-making positions — something we’re still far from achieving.

But could the scientific paradigm itself be alienating to women? Mary Somerville didn’t think so, but it’s often argued (most recently by some eco-feminist and post-colonial critics) that the seventeenth-century Scientific Revolution, which formed the template for modern science, was constructed by European men, and that consequently, the scientific method reflects a white, male way of thinking that inherently preferences white men’s interests and abilities over those of women and non-Westerners. It’s a problematic argument, but justification for it has included an important critique of reductionism — namely, that Western male experimental scientists have traditionally studied physical systems, plants, and even human bodies by dissecting them, studying their components separately and losing sight of the whole system or organism.

The limits of the reductionist philosophy were famously highlighted in biologist Rachel Carson’s book, Silent Spring, which showed that the post-War boom in chemical pest control didn’t take account of the whole food chain, of which insects are merely a part. Other dramatic illustrations are climate change, and medical disasters like the thalidomide tragedy: clearly, it’s no longer enough to focus selectively on specific problems such as the action of a drug on a particular symptom, or the local effectiveness of specific technologies; instead, scientists must consider the effect of a drug or medical procedure on the whole person, whilst new technological inventions shouldn’t be separated from their wider social and environmental ramifications.

In its proper place, however, reductionism in basic scientific research is important. (The recent infamous comment by American Republican Senate nominee Todd Akin — that women can “shut down” their bodies during a “legitimate rape”, in order not to become pregnant — illustrates the need for a basic understanding of how the various parts of the human body work.) I’m not sure if this kind of reductionism is a particularly male or particularly Western way of thinking, but either way there’s much more to the scientific method than this; it’s about developing testable hypotheses from observations (reductionist or holistic), and then testing those hypotheses in as objective a way as possible. The key thing in observing the world is curiosity, and this is a human trait, discernible in all children, regardless of race or gender. Of course, girls have traditionally faced more cultural restraints than boys, so perhaps we still need to encourage girls to be actively curious about the world around them. (For instance, it’s often suggested that women prefer biology to physics because they want to help people — and yet, many of the recent successes in medical and biological science would have been impossible without the technology provided by fundamental, curiosity-driven physics.)

Like Mary Somerville, I think the scientific method has universal appeal, but I also think feminist and other critics are right to question its patriarchal and capitalist origins. Although science at its best is value-free, it’s part of the broader community, whose values are absorbed by individual scientists. So much so that Yale researchers Moss-Racusin et al recently uncovered evidence that many scientists themselves, male and female, have an unconscious sexist bias. In their widely reported study, participants judged the same job application (for a lab manager position) to be less competent if it had a (randomly assigned) female name than if it had a male name.

In Mary Somerville’s day, such bias was overt, and it had the authority of science itself: women’s smaller brain size was considered sufficient to “prove” female intellectual inferiority. It was bad science, and it shows how patriarchal perceptions can skew the interpretation not just of women’s competence, but also of scientific data itself. (Without proper vigilance, this kind of subjectivity can slip through the safeguards of the scientific method because of other prejudices, too, such as racism, or even the agendas of funding bodies.) Of course, acknowledging the existence of patriarchal values in society isn’t about hating men or assuming men hate women. Mary Somerville met with “the utmost kindness” from individual scientific men, but that didn’t stop many of them from seeing her as the exception that proved the male-created rule of female inferiority. After all, it takes analysis and courage to step outside a long-accepted norm. And so, the “woman question” is still with us — but in trying to resolve it, we might not only find ways to remove existing gender biases, but also broaden the conversation about what sort of science we all want in the twenty-first century.

Robyn Arianrhod is an Honorary Research Associate in the School of Mathematical Sciences at Monash University. She is the author of Seduced by Logic: Émilie Du Châtelet, Mary Somerville and the Newtonian Revolution and Einstein’s Heroes.


Image credit: Mary Somerville. Public domain via Wikimedia Commons.

0 Comments on What sort of science do we want? as of 11/30/2012 6:45:00 PM
12. Summing up Alan Turing

By Jack Copeland


Three words to sum up Alan Turing? Humour. He had an impish, irreverent and infectious sense of humour. Courage. Isolation. He loved to work alone. Reading his scientific papers, it is almost as though the rest of the world — the busy community of human minds working away on the same or related problems — simply did not exist. Turing was determined to do it his way. Three more words? A patriot. Unconventional — he was uncompromisingly unconventional, and he didn’t much care what other people thought about his unusual methods. A genius. Turing’s brilliant mind was sparsely furnished, though. He was a Spartan in all things, inner and outer, and had no time for pleasing décor, soft furnishings, superfluous embellishment, or unnecessary words. To him what mattered was the truth. Everything else was mere froth. He succeeded where a better furnished, wordier, more ornate mind might have failed. Alan Turing changed the world.

What would it have been like to meet him? Turing was tallish (5 feet 10 inches) and broadly built. He looked strong and fit. You might have mistaken his age, as he always seemed younger than he was. He was good looking, but strange. If you came across him at a party you would notice him all right. In fact you might turn round and say “Who on earth is that?” It wasn’t just his shabby clothes or dirty fingernails. It was the whole package. Part of it was the unusual noise he made. This has often been described as a stammer, but it wasn’t. It was his way of preventing people from interrupting him, while he thought out what he was trying to say. Ah – Ah – Ah – Ah – Ah. He did it loudly.

If you crossed the room to talk to him, you’d probably find him gauche and rather reserved. He was decidedly lah-di-dah, but the reserve wasn’t standoffishness. He was a man of few words, shy. Polite small talk did not come easily to him. He might if you were lucky smile engagingly, his blue eyes twinkling, and come out with something quirky that would make you laugh. If conversation developed you’d probably find him vivid and funny. He might ask you, in his rather high-pitched voice, whether you think a computer could ever enjoy strawberries and cream, or could make you fall in love with it. Or he might ask if you can say why a face is reversed left to right in a mirror but not top to bottom.

Once you got to know him Turing was fun — cheerful, lively, stimulating, comic, brimming with boyish enthusiasm. His raucous crow-like laugh pealed out boisterously. But he was also a loner. “Turing was always by himself,” said codebreaker Jerry Roberts: “He didn’t seem to talk to people a lot, although with his own circle he was sociable enough.” Like everyone else Turing craved affection and company, but he never seemed to quite fit in anywhere. He was bothered by his own social strangeness — although, like his hair, it was a force of nature he could do little about. Occasionally he could be very rude. If he thought that someone wasn’t listening to him with sufficient attention he would simply walk away. Turing was the sort of man who, usually unintentionally, ruffled people’s feathers — especially pompous people, people in authority, and scientific poseurs. He was moody too. His assistant at the National Physical Laboratory, Jim Wilkinson, recalled with amusement that there were days when it was best just to keep out of Turing’s way. Beneath the cranky, craggy, irreverent exterior there was an unworldly innocence though, as well as sensitivity and modesty.

Turing died at the age of only 41. His ideas lived on, however, and at the turn of the millennium Time magazine listed him among the twentieth century’s 100 greatest minds, alongside the Wright brothers, Albert Einstein, DNA busters Crick and Watson, and the discoverer of penicillin, Alexander Fleming. Turing’s achievements during his short life were legion. Best known as the man who broke some of Germany’s most secret codes during the war of 1939-45, Turing was also the father of the modern computer. Today, all who click, tap or touch to open are familiar with the impact of his ideas. To Turing we owe the brilliant innovation of storing applications, and all the other programs necessary for computers to do our bidding, inside the computer’s memory, ready to be opened when we wish. We take for granted that we use the same slab of hardware to shop, manage our finances, type our memoirs, play our favourite music and videos, and send instant messages across the street or around the world. Like many great ideas this one now seems as obvious as the wheel and the arch, but with this single invention — the stored-program universal computer — Turing changed the way we live. His universal machine caught on like wildfire; today personal computer sales hover around the million a day mark. In less than four decades, Turing’s ideas transported us from an era where ‘computer’ was the term for a human clerk who did the sums in the back office of an insurance company or science lab, into a world where many young people have never known life without the Internet.

B. Jack Copeland is the Director of the Turing Archive for the History of Computing, and author of Turing: Pioneer of the Information Age, Alan Turing’s Electronic Brain, and Colossus. He is the editor of The Essential Turing. Read the new revelations about Turing’s death after Copeland’s investigation into the inquest.

Visit the Turing hub on the Oxford University Press UK website for the latest news in the Centenary year. Read our previous posts on Alan Turing including: “Maurice Wilkes on Alan Turing” by Peter J. Bentley, “Turing: the irruption of Materialism into thought” by Paul Cockshott, “Alan Turing’s Cryptographic Legacy” by Keith M. Martin, “Turing’s Grand Unification” by Cristopher Moore and Stephan Mertens, “Computers as authors and the Turing Test” by Kees van Deemter, and “Alan Turing, Code-Breaker” by Jack Copeland.

For more information about Turing’s codebreaking work, and to view digital facsimiles of declassified wartime ‘Ultra’ documents, visit The Turing Archive for the History of Computing. There is also an extensive photo gallery of Turing and his war at www.the-turing-web-book.com.


0 Comments on Summing up Alan Turing as of 11/30/2012 6:43:00 PM
13. Celebrating Newton, 325 years after Principia

By Robyn Arianrhod


This year, 2012, marks the 325th anniversary of the first publication of the legendary Principia (Mathematical Principles of Natural Philosophy), the 500-page book in which Sir Isaac Newton presented the world with his theory of gravity. It was the first comprehensive scientific theory in history, and it’s withstood the test of time over the past three centuries.

Unfortunately, this superb legacy is often overshadowed, not just by Einstein’s achievement but also by Newton’s own secret obsession with Biblical prophecies and alchemy. Given these preoccupations, it’s reasonable to wonder if he was quite the modern scientific guru his legend suggests, but personally I’m all for celebrating him as one of the greatest geniuses ever. Although his private obsessions were excessive even for the seventeenth century, he was well aware that in eschewing metaphysical, alchemical, and mystical speculation in his Principia, he was creating a new way of thinking about the fundamental principles underlying the natural world. To paraphrase Newton himself, he changed the emphasis from metaphysics and mechanism to experiment and mathematical analogy. His method has proved astonishingly fruitful, but initially it was quite controversial.

He had developed his theory of gravity to explain the cause of the mysterious motion of the planets through the sky: in a nutshell, he derived a formula for the force needed to keep a planet moving in its observed elliptical orbit, and he connected this force with everyday gravity through the experimentally derived mathematics of falling motion. Ironically (in hindsight), some of his greatest peers, like Leibniz and Huygens, dismissed the theory of gravity as “mystical” because it was “too mathematical.” As far as they were concerned, the law of gravity may have been brilliant, but it didn’t explain how an invisible gravitational force could reach all the way from the sun to the earth without any apparent material mechanism. Consequently, they favoured the mainstream Cartesian “theory”, which held that the universe was filled with an invisible substance called ether, whose material nature was completely unknown, but which somehow formed into great swirling whirlpools that physically dragged the planets in their orbits.

The only evidence for this vortex “theory” was the physical fact of planetary motion, but this fact alone could lead to any number of causal hypotheses. By contrast, Newton explained the mystery of planetary motion in terms of a known physical phenomenon, gravity; he didn’t need to postulate the existence of fanciful ethereal whirlpools. As for the question of how gravity itself worked, Newton recognized this was beyond his scope — a challenge for posterity — but he knew that for the task at hand (explaining why the planets move) “it is enough that gravity really exists and acts according to the laws that we have set forth and is sufficient to explain all the motions of the heavenly bodies…”
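
For readers who want the quantitative core of that claim, here is the standard textbook reconstruction, simplified to a circular orbit of radius r and period T (Newton's own argument in Principia handles the elliptical case), with the modern gravitational constant G used purely as notation:

\[
  F \;=\; \frac{m v^{2}}{r} \;=\; \frac{4\pi^{2} m r}{T^{2}},
  \qquad
  T^{2} \propto r^{3} \ \text{(Kepler's third law)}
  \;\Longrightarrow\;
  F \propto \frac{m}{r^{2}}.
\]

Identifying this inverse-square attraction toward the sun with everyday, terrestrial gravity then gives, in modern notation, \(F = GMm/r^{2}\).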

What’s more, he found a way of testing his theory by using his formula for gravitational force to make quantitative predictions. For instance, he realized that comets were not random, unpredictable phenomena (which the superstitious had feared as fiery warnings from God), but small celestial bodies following well-defined orbits like the planets. His friend Halley famously used the theory of gravity to predict the date of return of the comet now named after him. As it turned out, Halley’s prediction was fairly good, although Clairaut — working half a century later but just before the predicted return of Halley’s comet — used more sophisticated mathematics to apply Newton’s laws to make an even more accurate prediction.

Clairaut’s calculations illustrate the fact that despite the phenomenal depth and breadth of Principia, it took a further century of effort by scores of mathematicians and physicists to build on Newton’s work and to create modern “Newtonian” physics in the form we know it today. But Newton had created the blueprint for this science, and its novelty can be seen from the fact that some of his most capable peers missed the point. After all, he had begun the radical process of transforming “natural philosophy” into theoretical physics — a transformation from traditional qualitative philosophical speculation about possible causes of physical phenomena, to a quantitative study of experimentally observed physical effects. (From this experimental study, mathematical propositions are deduced and then made general by induction, as he explained in Principia.)

Even the secular nature of Newton’s work was controversial (and under apparent pressure from critics, he did add a brief mention of God in an appendix to later editions of Principia). Although Leibniz was a brilliant philosopher (and he was also the co-inventor, with Newton, of calculus), one of his stated reasons for believing in the ether rather than the Newtonian vacuum was that God would show his omnipotence by creating something, like the ether, rather than leaving vast amounts of nothing. (At the quantum level, perhaps his conclusion, if not his reasoning, was right.) He also invoked God to reject Newton’s inspired (and correct) argument that gravitational interactions between the various planets themselves would eventually cause noticeable distortions in their orbits around the sun; Leibniz claimed God would have had the foresight to give the planets perfect, unchanging perpetual motion. But he was on much firmer ground when he questioned Newton’s (reluctant) assumption of absolute rather than relative motion, although it would take Einstein to come up with a relativistic theory of gravity.

Einstein’s theory is even more accurate than Newton’s, especially on a cosmic scale, but within its own terms — that is, describing the workings of our solar system (including, nowadays, the motion of our own satellites) — Newton’s law of gravity is accurate to within one part in ten million. As for his method of making scientific theories, it was so profound that it underlies all the theoretical physics that has followed over the past three centuries. It’s amazing: one of the most religious, most mystical men of his age put his personal beliefs aside and created the quintessential blueprint for our modern way of doing science in the most objective, detached way possible. Einstein agreed; he wrote a moving tribute in the London Times in 1919, shortly after astronomers had provided the first experimental confirmation of his theory of general relativity:

“Let no-one suppose, however, that the mighty work of Newton can really be superseded by [relativity] or any other theory. His great and lucid ideas will retain their unique significance for all time as the foundation of our modern conceptual structure in the sphere of [theoretical physics].”

Robyn Arianrhod is an Honorary Research Associate in the School of Mathematical Sciences at Monash University. She is the author of Seduced by Logic: Émilie Du Châtelet, Mary Somerville and the Newtonian Revolution and Einstein’s Heroes. Read her previous blog posts.


The post Celebrating Newton, 325 years after Principia appeared first on OUPblog.

14. Alan M. Turing: Centenary Edition

This year marked the centenary of the birth of Alan Mathison Turing; among the many, many commemorative events that occurred during the Alan Turing Year were the reissues of two biographies of AMT. One was Andrew Hodges's extraordinary work Alan Turing: The Enigma. The other was Sara Turing's long-unavailable book about her son, simply titled [...]

15. Memories of undergraduate mathematics

By Lara Alcock


Two contrasting experiences stick in mind from my first year at university.

First, I spent a lot of time in lectures that I did not understand. I don’t mean lectures in which I got the general gist but didn’t quite follow the technical details. I mean lectures in which I understood not one thing from the beginning to the end. I still went to all the lectures and wrote everything down – I was a dutiful sort of student – but this was hardly the ideal learning experience.

Second, at the end of the year, I was awarded first class marks. The best thing about this was that later that evening, a friend came up to me in the bar and said, “Hey Lara, I hear you got a first!” and I was rapidly surrounded by other friends offering enthusiastic congratulations. This was a revelation. I had attended the kind of school at which students who did well were derided rather than congratulated. I was delighted to find myself in a place where success was celebrated.

Looking back, I think that the interesting thing about these two experiences is the relationship between them. How could I have done so well when I understood so little of so many lectures?

I don’t think that there was a problem with me. I didn’t come out at the very top, but obviously I had the ability and dedication to get to grips with the mathematics. Nor do I think that there was a problem with the lecturers. Like the vast majority of the mathematicians I have met since, my lecturers cared about their courses and put considerable effort into giving a logically coherent presentation. Not all were natural entertainers, but there was nothing fundamentally wrong with their teaching.

I now think that the problems were more subtle, and related to two issues in particular.

First, there was a communication gap: the lecturers and I did not understand mathematics in the same way. Mathematicians understand mathematics as a network of axioms, definitions, examples, algorithms, theorems, proofs, and applications.  They present and explain these, hoping that students will appreciate the logic of the ideas and will think about the ways in which they can be combined. I didn’t really know how to learn effectively from lectures on abstract material, and research indicates that I was pretty typical in this respect.

Students arrive at university with a set of expectations about what it means to ‘do mathematics’ – about what kind of information teachers will provide and about what students are supposed to do with it. Some of these expectations work well at school but not at university. Many students need to learn, for instance, to treat definitions as stipulative rather than descriptive, to generate and check their own examples, to interpret logical language in a strict, mathematical way rather than a more flexible, context-influenced way, and to infer logical relationships within and across mathematical proofs. These things are expected, but often they are not explicitly taught.

My second problem was that I didn’t have very good study skills. I wasn’t terrible – I wasn’t lazy, or arrogant, or easily distracted, or unwilling to put in the hours. But I wasn’t very effective in deciding how to spend my study time. In fact, I don’t remember making many conscious decisions about it at all. I would try a question, find it difficult, stare out of the window, become worried, attempt to study some section of my lecture notes instead, fail at that too, and end up discouraged. Again, many students are like this. I have met a few who probably should have postponed university until they were ready to exercise some self-discipline, but most do want to learn.

What they lack is a set of strategies for managing their learning – for deciding how to distribute their time when no-one is checking what they’ve done from one class to the next, and for maintaining momentum when things get difficult. Many could improve their effectiveness by doing simple things like systematically prioritizing study tasks, and developing a routine in which they study particular subjects in particular gaps between lectures.  Again, the responsibility for learning these skills lies primarily with the student.

Personally, I never got to a point where I understood every lecture. But I learned how to make sense of abstract material, I developed strategies for studying effectively, and I maintained my first class marks. What I would now say to current students is this: take charge. Find out what lecturers and tutors are expecting, and take opportunities to learn about good study habits. Students who do that should find, like I did, that undergraduate mathematics is challenging, but a pleasure to learn.

Lara Alcock is a Senior Lecturer in the Mathematics Education Centre at Loughborough University. She has taught both mathematics and mathematics education to undergraduates and postgraduates in the UK and the US. She conducts research on the ways in which undergraduates and mathematicians learn and think about mathematics, and she was recently awarded the Selden Prize for Research in Undergraduate Mathematics Education. She is the author of How to Study for a Mathematics Degree (2012, UK) and How to Study as a Mathematics Major (2013, US).

Image credit: Screenshot of Oxford English Dictionary definition of mathematics, n., via OED Online. All rights reserved.

The post Memories of undergraduate mathematics appeared first on OUPblog.

16. review – Ducklings in a Row by Renee Heiss

Ducklings in a Row by Renee Heiss, illustrated by Matthew B. Holcomb. Character Publishing. 4 stars. Back Cover: When Mama Duck asks her ducklings to arrange themselves from One to Ten, the baby ducks learn much more than sequencing skills. In Ducklings in a Row, ten unique duckling personalities combine to form a humorous …

17. Statistics and big data


By David J. Hand


Nowadays it appears impossible to open a newspaper or switch on the television without hearing about “big data”. Big data, it sometimes seems, will provide answers to all the world’s problems. Management consulting company McKinsey, for example, promises “a tremendous wave of innovation, productivity, and growth … all driven by big data”.

An alien observer visiting the Earth might think it represents a major scientific breakthrough. Google Trends shows references to the phrase bobbing along at about one per week until 2011, at which point there began a dramatic, steep, and almost linear increase in references to the phrase. It’s as if no one had thought of it until 2011. Which is odd because data mining, the technology of extracting valuable, useful, or interesting information from large data sets, has been around for some 20 years. And statistics, which lies at the heart of all of this, has been around as a formal discipline for a century or more.

Or perhaps it's not so odd. If you look back to the beginning of data mining, you find a very similar media enthusiasm for the advances it was going to bring, the breakthroughs in understanding, the sudden discoveries, the deep insights. In fact it almost looks as if we have been here before. All of this leads one to suspect that there's less to the big data enthusiasm than meets the eye: it's not so much a sudden change in our technical abilities as a sudden media recognition of what data scientists, and especially statisticians, are capable of.

Of course, I'm not saying that the increasing size of data sets does not lead to promising new opportunities – though I would question whether it's the "large" that really matters as much as the novelty of the data sets. The tremendous economic impact of GPS data (estimated to be $150-270bn per year), retail transaction data, or genomic and bioinformatics data arises not from the size of these data sets, but from the fact that they provide new kinds of information. And while it's true that a massive mountain of data needed to be explored to detect the Higgs boson, the core aspect was the nature of the data rather than its amount.

Moreover, if I’m honest, I also have to admit that it’s not solely statistics which leads to the extraction of value from these massive data sets. Often it’s a combination of statistical inferential methods (e.g. determining an accurate geographical location from satellite signals) along with data manipulation algorithms for search, matching, sorting and so on. How these two aspects are balanced depends on the particular application. Locating a shop which stocks that out of print book is less of an inferential statistical problem and more of a search issue. Determining the riskiness of a company seeking a loan owes little to search but much to statistics.

Diagram of Total Information Awareness system designed by the Information Awareness Office

Some time after the phrase "data mining" hit the media, it suffered a backlash. Predictably enough, much of this was based around privacy concerns. A paradigmatic illustration was the Total Information Awareness project in the United States. Its basic aim was to search for suspicious behaviour patterns within vast amounts of personal data, to identify individuals likely to commit crimes, especially terrorist offences. It included data on web browsing, credit card transactions, driving licences, court records, passport details, and so on. After concerns were raised, it was suspended in 2003 (though it is claimed that the software continued to be used by various agencies). As will be evident from recent events, concerns about the security agencies' monitoring of the public continue.

The key question is whether proponents of the huge potential of big data and its allied notion of open data are learning from the past. Recent media concern in the UK about the merging of family doctor records with hospital records, leading to a six-month delay in the launch of the project, illustrates the danger. Properly informed debate about the promise and the risks is vital.

Technology is amoral — neither intrinsically moral nor immoral. Morality lies in the hands of those who wield it. This is as true of big data technology as it is of nuclear technology and biotechnology. It is abundantly clear — if only from the examples we have already seen — that massive data sets do hold substantial promise for enhancing the well-being of mankind, but we must be aware of the risks. A suitable balance must be struck.

It’s also important to note that the mere existence of huge data files is of itself of no benefit to anyone. For these data sets to be beneficial, it’s necessary to be able to use the data to build models, to estimate effect sizes, to determine if an observed effect should be regarded as mere chance variation, to be sure it’s not a data quality issue, and so on. That is, statistical skills are critical to making use of the big data resources. In just the same way that vast underground oil reserves were useless without the technology to turn them into motive power, so the vast collections of data are useless without the technology to analyse them. Or, as I sometimes put it, people don’t want data, what they want are answers. And statistics provides the tools for finding those answers.
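
As a toy illustration of the statistical step being described (checking whether an observed effect is more than chance variation, and estimating how big it actually is), here is a short sketch in Python. Every number in it is invented for the example; nothing comes from the data sets mentioned in the post.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented measurements from two groups of 100,000 users each, where group B
# is genuinely about 1% better on average.
a = rng.normal(loc=10.0, scale=5.0, size=100_000)
b = rng.normal(loc=10.1, scale=5.0, size=100_000)

t_stat, p_value = stats.ttest_ind(a, b)        # is the difference mere chance variation?
effect_size = (b.mean() - a.mean()) / a.std()  # standardized size of the effect

print(f"p-value: {p_value:.2g}, effect size: {effect_size:.3f}")
# With samples this large, even a tiny effect is statistically significant;
# the effect size is what tells you whether it matters in practice.

This is the sense in which statistical skills, not the mere possession of a huge file, turn data into answers.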

David J. Hand is Professor of Statistics at Imperial College, London and author of Statistics: A Very Short Introduction

The Very Short Introductions (VSI) series combines a small format with authoritative analysis and big ideas for hundreds of topic areas. Written by our expert authors, these books can change the way you think about the things that interest you and are the perfect introduction to subjects you previously knew nothing about. Grow your knowledge with OUPblog and the VSI series every Friday and like Very Short Introductions on Facebook. Subscribe to only Very Short Introductions articles on the OUPblog via email or RSS.

Image credit: Diagram of Total Information Awareness system designed by the Information Awareness Office. Public domain via Wikimedia Commons

The post Statistics and big data appeared first on OUPblog.

18. For Those Of Us Who Think We Don’t Like Math

Math is on my mind lately as I wrap up the Parallelogram series. (Yes, Dear Readers, Book 4 is coming! There are just so many words.) I, like my main character Audie in the series, enjoy quantum physics but do not enjoy the math. Or, to put it less charitably, cannot do the math.

But I can’t help wondering if I would have had a completely different attitude toward math in school if I’d had a teacher like this. Or at least seen a demonstration like this. Because there’s no doubt Arthur Benjamin makes math FUN. (Although no matter how fun it is, I still think there’s no way mere mortals could do what he does.)

Enjoy!

19. Rebooting Philosophy

By Luciano Floridi


When we use a computer, its performance seems to degrade progressively. This is not a mere impression. An old version of Firefox, the free Web browser, was infamous for its “memory leaks”: it would consume increasing amounts of memory to the detriment of other programs. Bugs in the software actually do slow down the system. We all know what the solution is: reboot. We restart the computer, the memory is reset, and the performance is restored, until the bugs slow it down again.

Philosophy is a bit like a computer with a memory leak. It starts well, dealing with significant and serious issues that matter to anyone. Yet, in time, its very success slows it down. Philosophy begins to care more about philosophers' questions than philosophical ones, consuming increasing amounts of intellectual attention. Scholasticism is the ultimate freezing of the system, the equivalent of Windows' "blue screen of death"; so many resources are devoted to internal issues that no external input can be processed anymore, and the system stops. The world may be undergoing a revolution, but the philosophical discourse remains detached and utterly oblivious. Time to reboot the system.

Philosophical “rebooting” moments are rare. They are usually prompted by major transformations in the surrounding reality. Since the nineties, I have been arguing that we are witnessing one of those moments. It now seems obvious, even to the most conservative person, that we are experiencing a turning point in our history. The information revolution is profoundly changing every aspect of our lives, quickly and relentlessly. The list is known but worth recalling: education and entertainment, communication and commerce, love and hate, politics and conflicts, culture and health, … feel free to add your preferred topics; they are all transformed by technologies that have the recording and processing of information as their core functions. Meanwhile, philosophy is degrading into self-referential discussions on irrelevancies.

The result of a philosophical rebooting today can only be beneficial. Digital technologies are not just tools merely modifying how we deal with the world, like the wheel or the engine. They are above all formatting systems, which increasingly affect how we understand the world, how we relate to it, how we see ourselves, and how we interact with each other.

The ‘Fourth Revolution’ betrays what I believe to be one of the topics that deserves our full intellectual attention today. The idea is quite simple. Three scientific revolutions have had great impact on how we see ourselves. In changing our understanding of the external world they also modified our self-understanding. After the Copernican revolution, the heliocentric cosmology displaced the Earth and hence humanity from the centre of the universe. The Darwinian revolution showed that all species of life have evolved over time from common ancestors through natural selection, thus displacing humanity from the centre of the biological kingdom. And following Freud, we acknowledge nowadays that the mind is also unconscious. So we are not immobile, at the centre of the universe, we are not unnaturally separate and diverse from the rest of the animal kingdom, and we are very far from being minds entirely transparent to ourselves. One may easily question the value of this classic picture. After all, Freud was the first to interpret these three revolutions as part of a single process of reassessment of human nature and his perspective was blatantly self-serving. But replace Freud with cognitive science or neuroscience, and we can still find the framework useful to explain our strong impression that something very significant and profound has recently happened to our self-understanding.

Since the fifties, computer science and digital technologies have been changing our conception of who we are. In many respects, we are discovering that we are not standalone entities, but rather interconnected informational agents, sharing with other biological agents and engineered artefacts a global environment ultimately made of information, the infosphere. If we need a champion for the fourth revolution this should definitely be Alan Turing.

The fourth revolution offers a historical opportunity to rethink our exceptionalism in at least two ways. Our intelligent behaviour is confronted by the smart behaviour of engineered artefacts, which can be adaptively more successful in the infosphere. Our free behaviour is confronted by the predictability and manipulability of our choices, and by the development of artificial autonomy. Digital technologies sometimes seem to know more about our wishes than we do. We need philosophy to make sense of the radical changes brought about by the information revolution. And we need it to be at its best, for the difficulties we are facing are challenging. Clearly, we need to reboot philosophy now.

Luciano Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford, Senior Research Fellow at the Oxford Internet Institute, and Fellow of St Cross College, Oxford. He was recently appointed as ethics advisor to Google. His most recent book is The Fourth Revolution: How the Infosphere is Reshaping Human Reality.

Image credit: Alan Turing Statue at Bletchley Park. By Ian Petticrew. CC-BY-SA-2.0 via Wikimedia Commons.

The post Rebooting Philosophy appeared first on OUPblog.

20. A Fields Medal reading list

One of the highest points of the International Congress of Mathematicians, currently underway in Seoul, Korea, is the announcement of the Fields Medal prize winners. The prize is awarded every four years to up to four mathematicians under the age of 40, and is viewed as one of the highest honours a mathematician can receive.

This year sees the first ever female recipient of the Fields Medal, Maryam Mirzakhani, recognised for her highly original contributions to geometry and dynamical systems. Her work bridges several mathematical disciplines – hyperbolic geometry, complex analysis, topology, and dynamics – and influences them in return.

We’re absolutely delighted for Professor Mirzakhani, who serves on the editorial board for International Mathematics Research Notices. To celebrate the achievements of all of the winners, we’ve put together a reading list of free materials relating to their work and to fellow speakers at the International Congress of Mathematicians.

Ergodic Theory of the Earthquake Flow” by Maryam Mirzakhani, published in International Mathematics Research Notices

Noted by the International Mathematical Union as work contributing to Mirzakhani’s achievement, this paper investigates the dynamics of the earthquake flow defined by Thurston on the bundle PMg of geodesic measured laminations.

Ergodic Theory of the Space of Measured Laminations” by Elon Lindenstrauss and Maryam Mirzakhani, published in International Mathematics Research Notices

A classification of locally finite invariant measures and orbit closure for the action of the mapping class group on the space of measured laminations on a surface.

Mass Formulae for Extensions of Local Fields, and Conjectures on the Density of Number Field Discriminants” by Manjul Bhargava, published in International Mathematics Research Notices

Manjul Bhargava joins Maryam Mirzakhani amongst this year's winners of the Fields Medal. Here he uses Serre's mass formula for totally ramified extensions to derive a mass formula that counts all étale algebra extensions of a local field F having a given degree n.

Model theory of operator algebras” by Ilijas Farah, Bradd Hart, and David Sherman, published in International Mathematics Research Notices

Several authors, some of them speaking at the International Congress of Mathematicians, have considered whether the ultrapower and the relative commutant of a C*-algebra or II1 factor depend on the choice of the ultrafilter.

Small gaps between products of two primes” by D. A. Goldston, S. W. Graham, J. Pintz, and C. Y. Yildrim, published in Proceedings of the London Mathematical Society

Speaking on the subject at the International Congress, Dan Goldston and colleagues prove several results relating to the representation of numbers with exactly two prime factors by linear forms.

On Waring’s problem: some consequences of Golubeva’s method” by Trevor D. Wooley, published in the Journal of the London Mathematical Society

Wooley’s paper, as well as his talk at the congress, investigates sums of mixed powers involving two squares, two cubes, and various higher powers concentrating on situations inaccessible to the Hardy-Littlewood method.

 

Image credit: (1) Inner life of human mind and maths, © agsandrew, via iStock Photo. (2) Maryam Mirzakhani 2014. Photo by International Mathematical Union. Public Domain via Wikimedia Commons.

The post A Fields Medal reading list appeared first on OUPblog.

21. Special events and the dynamical statistics of Twitter

A large variety of complex systems in ecology, climate science, biomedicine, and engineering have been observed to exhibit so-called tipping points, where the dynamical state of the system abruptly changes. Typical examples are the rapid transition in lakes from clear to turbid conditions or the sudden extinction of species after a slight change in environmental conditions. Data and models suggest that detectable warning signs may precede some, though clearly not all, of these drastic events. This view is also corroborated by recently developed abstract mathematical theory for systems where processes evolve at different rates and are subject to internal and/or external stochastic perturbations.

One main idea to derive warning signs is to monitor the fluctuations of the dynamical process by calculating the variance of a suitable monitoring variable. When the tipping point is approached via a slowly-drifting parameter, the stabilizing effects of the system slowly diminish and the noisy fluctuations increase via certain well-defined scaling laws.
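
A minimal simulation of that idea (my own sketch, not taken from the research described here): a noisy linear system whose stabilizing rate k drifts toward zero has stationary variance sigma^2 / (2k), so the size of the fluctuations grows in a well-defined way as the tipping point at k = 0 is approached.

import numpy as np

rng = np.random.default_rng(1)
dt, sigma = 0.01, 1.0

def simulated_variance(k, steps=200_000):
    """Empirical variance of x' = -k*x + noise for a fixed stabilizing rate k."""
    x = np.zeros(steps)
    noise = rng.normal(size=steps)
    for t in range(steps - 1):
        x[t + 1] = x[t] - k * x[t] * dt + sigma * np.sqrt(dt) * noise[t]
    return x[steps // 10:].var()  # discard the initial transient

for k in (2.0, 1.0, 0.5, 0.25):
    print(f"k = {k:4.2f}  simulated variance: {simulated_variance(k):.2f}  "
          f"theory sigma^2/(2k): {sigma ** 2 / (2 * k):.2f}")

Monitoring such variance growth in a slowly drifting system is the prototype of the warning signs discussed in this post.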

Based upon these observations, it is natural to ask whether these scaling laws are also present in human social networks and could allow us to make predictions about future events. This is an exciting open problem, to which at present only highly speculative answers can be given; predicting a priori unknown events in a social system is genuinely hard. Therefore, as an initial step, we reduce the problem to a much simpler one: understanding whether the same mechanisms that have been observed in the natural sciences and engineering could also be present in sociological domains.

Courtesy of Christian Kuehn.

In our work, we provide a very first step towards tackling a substantially simpler question by focusing on a priori known events. We analyse a social media data set with a focus on classical variance and autocorrelation scaling law warning signs. In particular, we consider a few events, which are known to occur on a specific time of the year, e.g., Christmas, Halloween, and Thanksgiving. Then we consider time series of the frequency of Twitter hashtags related to the considered events a few weeks before the actual event, but excluding the event date itself and some time period before it.

Now suppose we do not know that a dramatic spike in the number of Twitter hashtags, such as #xmas or #thanksgiving, will occur on the actual event date. Are there signs of the same stochastic scaling laws observed in other dynamical systems visible some time before the event? The more fundamental question is: Are there similarities to known warning signs from other areas also present in social media data?

We answer this question affirmatively, as we find that the a priori known events mentioned above are preceded by variance and autocorrelation growth (see Figure). Nevertheless, we are still very far from actually using social networks to predict the occurrence of many other drastic events. For example, it can also be shown that many spikes in Twitter activity are not predictable through variance and autocorrelation growth. Hence, a lot more research is needed to distinguish the different dynamical processes that lead to large outbursts of activity on social media.
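
As a sketch of the kind of warning-sign calculation this paragraph describes, here is a short Python example run on an invented daily-count series rather than the authors' Twitter data: compute the variance and the lag-1 autocorrelation in a sliding window and check whether both grow as the event date approaches.

import numpy as np

rng = np.random.default_rng(2)
days = np.arange(60)
# Invented hashtag counts: a noisy baseline whose fluctuations grow toward day 60.
counts = 100 + (1 + days / 20) * rng.normal(size=60).cumsum()

def rolling_indicators(series, window=20):
    """Variance and lag-1 autocorrelation in each sliding window."""
    variances, autocorrs = [], []
    for start in range(len(series) - window + 1):
        w = series[start:start + window]
        variances.append(w.var())
        autocorrs.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(variances), np.array(autocorrs)

var_w, ac_w = rolling_indicators(counts)
print("variance, first window vs last:      ", round(float(var_w[0]), 1), round(float(var_w[-1]), 1))
print("lag-1 autocorrelation, first vs last:", round(float(ac_w[0]), 2), round(float(ac_w[-1]), 2))

Sustained growth in both indicators ahead of the event date is the warning sign described above; an activity spike with no such build-up would not be flagged by this method.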

The findings suggest that further investigations of dynamical processes in social media would be worthwhile. Currently, a main focus in the research on social networks lies on structural questions, such as: Who connects to whom? How many connections do we have on average? Who are the hubs in social media? However, if one takes dynamical processes on the network, as well as the changing dynamics of the network topology, into account, one may obtain a much clearer picture of how social systems compare and relate to classical problems in physics, chemistry, biology, and engineering.

The post Special events and the dynamical statistics of Twitter appeared first on OUPblog.

22. How cats land on their feet

By Ian Stewart

Falling cats can turn over in mid-air. Well, most cats can. Our first cat, Seamus, didn't have a clue. My wife, worried he might fall off a fence and hurt himself, tried to train him by holding him over a cushion and letting go. He enjoyed the game, but he never learned how to flip himself over.

23. Mitchell discovers a comet

This Day in World History - Each evening that weather permitted, Maria (pronounced Mah-RYE-uh) Mitchell mounted the stairs to the roof of her family’s Nantucket home to sweep the sky with a telescope looking for a comet. Mitchell—who had been taught mathematics and astronomy by her father—began the practice in 1836. Eleven years later, on October 1, 1847, her long labors finally paid off. When she saw the comet, she quickly summoned her father, who agreed with her conclusion.

24. Sudoku and the Pace of Mathematics

By Jason Rosenhouse


Among mathematicians, it is always a happy moment when a long-standing problem is suddenly solved. The year 2012 started with such a moment, when an Irish mathematician named Gary McGuire announced a solution to the minimal-clue problem for Sudoku puzzles.

You have seen Sudoku puzzles, no doubt, since they are nowadays ubiquitous in newspapers and magazines. They look like this:

Your task is to fill in the vacant cells with the digits from 1-9 in such a way that each row, column and three by three block contains each digit exactly once. In a proper puzzle, the starting clues are such as to guarantee there is only one way of completing the square.

This particular puzzle has just seventeen starting clues. It had long been believed that seventeen was the minimum number for any proper puzzle. Mathematician Gordon Royle maintains an online database which currently contains close to fifty thousand puzzles with seventeen starting clues (in fact, the puzzle above is adapted from one of the puzzles in that list). However, despite extensive computer searching, no example of a puzzle with sixteen or fewer clues had ever been found.

The problem was that an exhaustive computer search seemed impossible. There were simply too many possibilities to consider. Even using the best modern hardware, and employing the most efficient search techniques known, hundreds of thousands of years would have been required.

Pure mathematics likewise provided little assistance. It is easy to see that seven clues must be insufficient. With seven starting clues there would be at least two digits that were not represented at the start of the puzzle. To be concrete, let us say that there were no 1s or 2s in the starting grid. Then, in any completion of the starting grid it would be possible simply to change all the 1s to 2s, and all the 2s to 1s, to produce a second valid solution to the puzzle. After making this observation, however, it is already unclear how to continue. Even a simple argument proving the insufficiency of eight clues has proven elusive.

McGuire’s solution requires a combination of mathematics and computer science. To reduce the time required for an exhaustive search he employed the idea of an “unavoidable set.” Consider the shaded cells in this Sudoku square:

Now imagine a starting puzzle having this square for a solution. Can you see why we would need to have at least one starting clue in one of those shaded cells? The reason is that if we did not, then we would be able to toggle the digits in those cells to produce a second solution to the same puzzle. In fact, this particular Sudoku square has a lot of similar unavoidable sets; in general some squares will have more than others, and of different types. Part of McGuire’s solution involved finding a large collection of certain types of unavoidable sets in every Sudoku square under consideration.
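
To see the toggle argument in code, here is a small self-contained Python sketch; it is my own illustration, not McGuire's software. It checks that a completed grid is a valid solution, then swaps two digits within a chosen set of cells. If the swapped grid is still a valid solution, a proper puzzle must place at least one starting clue inside that set, which is exactly what makes the set "unavoidable". The demonstration uses the simplest such set, the one behind the seven-clue argument above: all cells containing two particular digits.

def is_valid_solution(grid):
    """True if the 9x9 grid has each of 1-9 exactly once in every row, column and box."""
    digits = set(range(1, 10))
    rows = all(set(row) == digits for row in grid)
    cols = all({grid[r][c] for r in range(9)} == digits for c in range(9))
    boxes = all(
        {grid[r + dr][c + dc] for dr in range(3) for dc in range(3)} == digits
        for r in (0, 3, 6) for c in (0, 3, 6)
    )
    return rows and cols and boxes

def toggle(grid, cells, d1, d2):
    """Return a copy of the grid with digits d1 and d2 swapped inside `cells`."""
    new = [row[:] for row in grid]
    for r, c in cells:
        if new[r][c] == d1:
            new[r][c] = d2
        elif new[r][c] == d2:
            new[r][c] = d1
    return new

# A valid Sudoku solution built from a standard formula, used only as test data.
grid = [[(3 * r + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]
cells = [(r, c) for r in range(9) for c in range(9) if grid[r][c] in (1, 2)]
other = toggle(grid, cells, 1, 2)
print(is_valid_solution(grid), is_valid_solution(other), other != grid)  # True True True

Since both grids are valid solutions and differ only on those eighteen cells, any puzzle whose clues avoid them entirely cannot have a unique solution.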

Finding these unavoidable sets permits a dramatic reduction in the size of the space that must be searched. Rather than searching through every sixteen-clue subset of a given Sudoku square, desperately looking for one that is actually a proper puzzle, we need only consider sets of sixteen starting clues containing at least one clue from every unavoidable set.

25. Haunted Happenings

Halloween has always been a fun time of year for me. I love dressing up in costume. It's very much like creating the characters in my stories, only in costume I become a character for real. In fact, I bring some costume pieces along with me when I do school visits and help the students devise new and interesting characters.

So today's post is a collection of interesting Halloween(ish) news I've unearthed of late.

Of course, you know I love libraries, so how cool is a haunted one? That's right, in Deep River, Connecticut, the public library (a former home built in 1881 by a local businessman) has not just one ghost but many. Wouldn't that make for some interesting storytimes?

The American Library Association's GREAT WEBSITES FOR KIDS isn't too scary, but there are a frightfully wonderful number of cool places to visit there. Take for example this website on BATS--the kind that fly in the night. That's kind of spooky.

Or try National Geographic's CAT site. Have you ever seen a cat skeleton?

So I admit, Math was always a little scary for me. That's why I've included this site here called COOL MATH--An Amusement Park of Math and More. Check it out for puzzles, games, and Bubba Man in his awesome Halloween costume.

If all these Halloween antics make you hungry, stop by the For Kids section here on my site and find the recipe for SPIDER SNACKS. Then you can munch along as you do the HALLOWEEN CROSSWORD, lurking just around the corner.

Happy Hauntings!


