3D Systems, in collaboration with YALSA, is committed to expanding young people’s access to 21st century tools like 3D design, 3D scanning and 3D printing. The MakerLab Club is a brand new community of thousands of U.S. libraries and museums committed to advancing 3D digital literacy via dedicated equipment, staff training and increased public access.
3D Systems is donating up to 4,000 new 3D printers to libraries and museums across the country that join the MakerLab Club and provide access to 3D printing and design programs and services for their communities. Libraries can apply to be part of the MakerLab Club via an online application, open now until November 17, 2014. Donated printers will be allocated on a competitive basis.
ELIGIBILITY AND MEMBERSHIP REQUIREMENTS
Membership in the MakerLab Club is available to libraries committed to creating or expanding makerlabs and/or making activities and to providing community access to 3D printers and digital design.
MAKER LAB CLUB BENEFITS
Libraries can receive up to four donated Cube 3D printers, as well as regular access to workshop curricula and content via webinars. Libraries will also receive exclusive equipment discounts and opportunities to win free hardware and software. In addition to resources and training, library staff can join and participate in communities of practice in order to exchange ideas and best practices.
LEARN MORE ABOUT MAKING
Learn more about making in libraries via the resources on YALSA’s wiki, including a free webinar and downloadable toolkit. And be sure to mark your calendar for March 8 – 14, 2015 when we celebrate Teen Tech Week with the theme “Libraries are for Making ____________.”
Sixty years ago today, the visionary convention establishing the European Organization for Nuclear Research – better known by its French acronym, CERN – entered into force, marking the beginning of an extraordinary scientific adventure that has profoundly changed science, technology, and society, and that is still far from over.
Like other pan-European institutions established in the late 1940s and early 1950s — such as the Council of Europe and the European Coal and Steel Community — CERN had the same founding goal: to coordinate the efforts of European countries after the devastating losses and large-scale destruction of World War II. Europe had in particular lost its scientific and intellectual leadership, and many scientists had fled to other countries. The time had come for European researchers to join forces to create a world-leading laboratory for fundamental science.
Sixty years after its foundation, CERN is today the largest scientific laboratory in the world, with more than 2000 staff members and many more temporary visitors and fellows. It hosts the most powerful particle accelerator ever built. It also hosts exhibitions, lectures, shows, meetings, and debates, providing a forum for discussion where science meets industry and society.
What has happened in these six decades of scientific research? As a physicist, I should probably first mention the many ground-breaking discoveries in Particle Physics, such as the discovery of some of the most fundamental building blocks of matter, like the W and Z bosons in 1983; the measurement of the number of neutrino families at LEP in 1989; and of course the recent discovery of the Higgs boson in 2012, which led to the 2013 Nobel Prize in Physics for Peter Higgs and François Englert.
But looking back at the glorious history of this laboratory, much more comes to mind: the development of technologies that found medical applications, such as PET scans; computer science applications such as globally distributed computing, which finds applications in many fields ranging from genetic mapping to economic modeling; and the World Wide Web, which was developed at CERN as a network to connect universities and research laboratories.
If you’ve ever asked yourself what such a laboratory might look like, especially if you plan to visit it in the future and expect to see buildings with a distinctively sleek, high-tech look, let me warn you that the first impression may be slightly disappointing. When I first visited CERN, I couldn’t help noticing the old buildings, dusty corridors, and the overall rather grimy look of the section hosting the theory institute. But it was when an elevator brought me down to visit the accelerator that I realized what was actually happening there, as I witnessed the colossal size of the detectors and the incredible sophistication of the technology used. ATLAS, for instance, is a detector 25 meters high, 25 meters wide, and 45 meters long, and it weighs about 7,000 tons!
The 27-km long Large Hadron Collider is currently shut down for planned upgrades. When new beams of protons are circulated in it at the end of 2014, they will be at almost twice the energy reached in the previous run. There will be about 2800 bunches of protons in its orbit, each containing several hundred billion protons, separated by – as in a car race, the distance between bunches can be expressed in units of time – 250 billionths of a second. The energy of each proton will be comparable to that of a flying mosquito, but concentrated in a single elementary particle. And the energy of an entire bunch of protons will be comparable to that of a medium-sized car launched at highway speed.
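These comparisons can be checked with a quick back-of-envelope calculation. The sketch below assumes roughly 6.5 TeV per proton for the upgraded run, about 1.2 × 10^11 protons per bunch, a 2.5 mg mosquito flying at 0.5 m/s, and a 1,500 kg car; all of these figures are illustrative assumptions, not values from the article.

```python
# Back-of-envelope check of the beam-energy comparisons.
# Assumed figures: 6.5 TeV per proton, 1.2e11 protons per bunch,
# a 2.5 mg mosquito at 0.5 m/s, a 1,500 kg car.

EV_TO_J = 1.602176634e-19                    # joules per electronvolt

proton_energy_J = 6.5e12 * EV_TO_J           # energy of one proton, ~1e-6 J
mosquito_KE_J = 0.5 * 2.5e-6 * 0.5**2        # kinetic energy of the mosquito

protons_per_bunch = 1.2e11
bunch_energy_J = proton_energy_J * protons_per_bunch   # ~1e5 J per bunch

# Speed at which a 1,500 kg car would carry one bunch's energy:
car_mass_kg = 1500.0
car_speed_ms = (2 * bunch_energy_J / car_mass_kg) ** 0.5

print(f"proton energy: {proton_energy_J:.2e} J")
print(f"mosquito KE:   {mosquito_KE_J:.2e} J")
print(f"bunch energy:  {bunch_energy_J:.2e} J")
print(f"car speed:     {car_speed_ms:.0f} m/s")
```

The point is the orders of magnitude: a single proton carries about a microjoule, in the same ballpark as a slow-flying mosquito, while a full bunch carries on the order of 10^5 joules, the kinetic energy of a moving car (the exact speed depends on the mass and bunch population assumed).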
Why these high energies? Einstein’s E=mc2 tells us that energy can be converted to mass, so by colliding two protons at very high energy, we can in principle produce very heavy particles, possibly new particles that we have never observed before. You may wonder why we would expect such new particles to exist. After all, we have already successfully created Higgs bosons through very high-energy collisions, so what can we expect to find beyond that? Well, that’s where the story becomes exciting.
Some of the best motivated theories currently under scrutiny in the scientific community – such as Supersymmetry – predict that not only should new particles exist, but they could explain one of the greatest mysteries in Cosmology: the presence of large amounts of unseen matter in the Universe, which seems to dominate the dynamics of all structures in the Universe, including our own Milky Way galaxy — Dark Matter.
Identifying in our accelerators the substance that permeates the Universe and shapes its structure would represent an important step forward in our quest to understand the Cosmos, and our place in it. CERN, 60 years on and still going strong, is rising to the challenge.
Headline image credit: An example of simulated data modeled for the CMS particle detector on the Large Hadron Collider (LHC) at CERN. Image by Lucas Taylor, CERN. CC BY-SA 3.0 via Wikimedia Commons.
2014 marks not just the centenary of the start of World War I and the 75th anniversary of the start of World War II; on 29 September it is also 60 years since the establishment of CERN, the European Organization for Nuclear Research, now in effect Europe's laboratory for particle physics. Less than a decade after European nations had been fighting one another in a terrible war, 12 of those nations had united in science. Today, CERN is a world laboratory, famed for having been the home of the World Wide Web, brainchild of then CERN scientist Tim Berners-Lee; of several Nobel Prizes for Physics, although not (yet) for Peace; and most recently, for the discovery of the Higgs Boson. The origin of CERN, and its political significance, are perhaps no less remarkable than its justly celebrated status as the greatest laboratory of scientific endeavour in history.
Its life has spanned a remarkable period in scientific culture. The paradigm shifts in our understanding of the fundamental particles and the forces that control the cosmos, which have occurred since 1950, are in no small measure thanks to CERN.
In 1954, the hoped-for simplicity in matter, where the electron and neutrino partner a neutron and proton, had been lost. Novel relatives of the proton were proliferating. Then, exactly 50 years ago, the theoretical concept of the quark was born, which explains the multitude as bound states of groups of quarks. By 1970 the existence of this new layer of reality had been confirmed by experiments at Stanford, California, and at CERN.
During the 1970s our understanding of quarks and the strong force developed. On the one hand this was thanks to theory, but also due to experiments at CERN’s Intersecting Storage Rings: the ISR. Head on collisions between counter-rotating beams of protons produced sprays of particles, which instead of flying in all directions, tended to emerge in sharp jets. The properties of these jets confirmed the predictions of quantum chromodynamics – QCD – the theory that the strong force arises from the interactions among the fundamental quarks and gluons.
CERN had begun in 1954 with a proton synchrotron, a circular accelerator with a circumference of about 600 metres, which was vast at the time, although trifling by modern standards. This was superseded by a super-proton synchrotron, or SPS, some 7 kilometres in circumference. This fired beams of protons and other particles at static targets, its precision measurements building confidence in the QCD theory and also in the theory of the weak force – QFD, quantum flavourdynamics.
QFD brought the electromagnetic and weak forces into a single framework. This first step towards a possible unification of all forces implied the existence of W and Z bosons, analogues of the photon. Unlike the massless photon, however, the W and Z were predicted to be very massive, some 80 to 90 times more than a proton or neutron, and hence beyond reach of experiments at that time. This changed when the SPS was converted into a collider of protons and anti-protons. By 1984 experiments at the novel accelerator had discovered the W and Z bosons, in line with what QFD predicted. This led to Nobel Prizes for Carlo Rubbia and Simon van der Meer, in 1984.
The confirmation of QCD and QFD led to a marked change in particle physics. Where hitherto it had sought the basic templates of matter, from the 1980s it turned increasingly to understanding how matter emerged from the Big Bang. For CERN’s very high-energy experiments replicate conditions that were prevalent in the hot early universe, and theory implies that the behaviour of the forces and particles in such circumstances is less complex than at the relatively cool conditions of daily experience. Thus began a period of high-energy particle physics as experimental cosmology.
This raced ahead during the 1990s with LEP – the Large Electron Positron collider, a 27 kilometre ring of magnets underground, which looped from CERN towards Lake Geneva, beneath the airport and back to CERN, via the foothills of the Jura Mountains. Initially designed to produce tens of millions of Z bosons, in order to test QFD and QCD to high precision, by 2000 it was able to produce pairs of W bosons. The precision was such that small deviations were found between these measurements and what theory implied for the properties of these particles.
The explanation involved two particles, whose subsequent discoveries have closed a chapter in physics. These are the top quark, and the Higgs Boson.
As gaps in Mendeleev’s periodic table of the elements in the 19th century had identified new elements, so at the end of the 20th century a gap in the emerging pattern of particles was discerned. To complete the menu required a top quark.
The precision measurements at LEP could be explained if the top quark exists: too massive for LEP to produce directly, but nonetheless able to disturb the measurements of other quantities at LEP courtesy of quantum theory. Theory and data would agree if the top quark mass were nearly two hundred times that of a proton. The top quark was discovered at Fermilab in the USA in 1995, with its mass as required by the LEP data from CERN.
As the 21st century dawned, all the pieces of the “Standard Model” of particles and forces were in place, but one. The theories worked well, but we had no explanation of why the various particles have their menu of masses, or even why they have mass at all. Adding mass into the equations by hand is like a band-aid, capable of allowing computations that agree with data to remarkable precision. However, we can imagine circumstances, where particles collide at energies far beyond those accessible today, where the theories would predict nonsense — infinity as the answer for quantities that are finite, for example. A mathematical solution to this impasse had been discovered fifty years ago, and implied that there is a further massive particle, known as the Higgs Boson, after Peter Higgs who, alone of the independent discoverers of the concept, drew attention to some crucial experimental implications of the boson.
Discovery of the Higgs Boson at CERN in 2012 following the conversion of LEP into the LHC – Large Hadron Collider – is the climax of CERN’s first 60 years. It led to the Nobel Prize for Higgs and François Englert, theorists whose ideas initiated the quest. Many wondered whether the Nobel Foundation would break new ground and award the physics prize to a laboratory, CERN, for enabling the experimental discovery, but this did not happen.
CERN has been associated with other Nobel Prizes in Physics, such as that of Georges Charpak, for his innovative work developing methods of detecting radiation and particles, which are used not just at CERN but in industry and hospitals. CERN’s reach has been remarkable. From a vision that helped unite Europe through science, we have seen it bridge the Cold War, with collaborations from the 1960s onwards with JINR, the Warsaw Pact’s scientific analogue, and today CERN has become truly a physics laboratory for the world.
In many ways, it is difficult to believe that my library has been circulating iPads in the children’s library for almost three years. Despite the continued discourse on the role of tablet technology among pre-readers, there is no question that children and families have continued to integrate such devices into their lives. In our community parents look to the librarians to guide their app selections, or at least point them towards the resources that can assist in discerning which purchases to make.
Kids grasping a new Tween Tab from the library. Photo courtesy of Jacquie Miller.
It first began with the preschool set, as our library made the decision to circulate Early Literacy Kits. The idea was to infuse tech into what we as children’s librarians were already expounding to parents and caregivers about the practices they can master in order to cultivate a reader. Now after so many parents have expressed that their child has already “Spot the Red Dot,” so to speak, what’s next?
After the debut of our nonfiction reorganization, we began thinking of ways to market the collection glades and showcase other resources that reflected these areas. Time and time again we kept hearing requests to circulate tablets for school-age children. Some of our initial concerns stemmed from the prevalence of iPads in homes in our community. Was there really even a need, with most families already owning at least one device? We realized that this wasn’t the case for all families, and if anything, a lot of the feedback has been that patrons see our curated list of apps as the main draw. With this encouragement, why not mirror some of the same subjects we wanted to point kids to in redefining how we navigate nonfiction? Our focus would be to highlight apps that inform, engage, and are used to create. Looking back at a previous attempt at circulating devices for tweens that had fizzled out, we knew that we could give our Tween Tabs new life.
The method of circulating, updating, and restricting the devices would match the process for the early literacy tablets. Due to other initiatives we wouldn’t be able to roll out six iPads at once, so we decided to test-drive the service with two tablets. Instead of the 5 Early Literacy Practices, we would be dividing our apps into the new nonfiction glades: Facts, Traditions, Create, Sports, Self, Fun, Animals, STEM, and Then & Now, adding Bio and Languages to round it out. Compiling some of our favorites from the past few years, which we have demoed in programs and sometimes spontaneously in the stacks, the total list included over 70 apps.
This summer at the Fayetteville Free Library in Fayetteville, NY, we piloted our first-ever week-long summer camp during Summer Reading. The Fayetteville Free Library Geek Girl Camp is a camp for girls in grades 3 through 5 that introduces them to hands-on STEM skills and to female role models. Months of work went into planning this camp, which fulfills a need in our greater community. According to the Girl Scout Research Institute, “Research shows that girls start losing interest in math and science during middle school. Girls are typically more interested in careers where they can help others (e.g., teaching, child care, working with animals) and make the world a better place. Recent surveys have shown that girls and young women are much less interested than boys and young men in math and science.”
We had 44 girls attend the FFL Geek Girl Camp from all over the greater Syracuse, NY area. We had over 10 girls on the waiting list and charged $25.00 for the camp to offset the cost of food, t-shirts, and supplies. We also offered four scholarship opportunities for those who might not be able to afford the cost of the camp. In addition to the 44 girls who came to the camp, we had 9 speakers from across the country join us in person or via Skype. Speakers included students from Virginia Commonwealth University, Cornell University, Syracuse University, and SUNY College of Environmental Science and Forestry. Other speakers included women who work for Facebook, the Air Force, a pharmaceutical research facility, and the national organizations Girls in Tech and Girl Develop It. Each day we heard from one or more speakers who talked about what they do at their jobs or in school and how important it is to have women working in these fields! They all made sure to relate to the girls in attendance, and campers had great questions afterwards.
Throughout the week we had a great array of activities. We rented a cement mixer and made an oobleck pool for kids to run across after learning about density and viscosity, shot off model rockets, chucked books, apples and water balloons with a trebuchet after learning about projectiles, force, gravity and more. Girls learned about fractals, made mini catapults, 3D printed, used littlebits kits, Snap Circuits and computer programmed with Scratch and much more.
The camp was such a huge success that the parents of those who attended were appreciative above and beyond, and already wanted to sign up for next year. We learned from this camp that we created something valuable for our community and that we should transition to this camp model for future Summer Reading programs. We were asked, “When are you having a camp for boys?” We will not only have camps for boys and girls, but for different ages as well. Planning FFL Geek Girl Camp did take a lot of time; however, the outcome was far beyond what we expected and worth the time spent planning, given the impact it had on our community. Camps offer children an opportunity to learn more and build stronger relationships over a short period of time. Like camp when we were kids, it was a place to learn new things, meet new friends, and create memories that last a lifetime.
On the first day of FFL Geek Girl Camp the campers were a little shy, but by the second day the girls couldn’t stop talking and working together. We run biweekly programs where kids come in every other week to work on projects, but having children in the library every day for a week gives you an opportunity to teach kids a skill without worrying about rushing or not being able to complete the task; you can also do projects or lessons that take longer and are more complex. Camps also give us a great opportunity to get to know our patrons. Girls come in and out of the library now looking for their camp counselors to say hi! Cost is also a huge factor in running a camp at a library versus a different venue. We had materials donated to the camp and used many of the resources we already owned, including our own staff to plan and run the program. Most science camps range in price anywhere from $75 to $600. We decided that $25 was not only affordable but also fit into our budget for the camp to make it run successfully.
We think that camps are the future of Summer Reading. They give us and the community an opportunity to focus on important topics like STEAM and produce content that is beneficial and influential. At the end of the week our campers said they wanted to be inventors, work at Google, become web developers and physicists. If it weren’t for the atmosphere we created at the library and the week-long camp, we would never have seen these results and this impact on our community.
Please check out our website for more information about the FFL Geek Girl Camp, our Flickr page and hashtag #geekgirl14 on Twitter and Instagram.
Meredith Levine is the Director of Family Engagement at the Fayetteville Free Library. Meredith is a member of the ALSC School Age Programs and Services Committee. Find out more at www.fflib.org or email Meredith at firstname.lastname@example.org
Public libraries are, as ALA President Courtney Young said in a July 2014 Comcast Newsmaker interview, “digital learning centers.” We are able to provide access to computers, wireless capabilities, and also a space to learn. Access to technology becomes even more important for our “at-risk” teens; the library becomes a safe spot to use these resources. The question becomes: how do we help them use this technology and learn from it? Earlier this month, the Stanford Center for Opportunity Policy in Education (SCOPE) published a report titled “Using Technology to Support At-Risk Students’ Learning.” This brief defines “at-risk” students as high schoolers with personal and academic factors that could cause them to fail classes or drop out of school altogether. It gives three variables for success, real-life examples of why these variables work, and then recommends policies to help achieve them. While the article was geared towards schools, these variables are important to keep in mind as we work with the teens in our libraries.
When learning new digital skills, youth must be engaged in interactive projects, must do more discovery and creation than the standard “drill and kill,” and must have a blend of both teacher and technology (6). These variables are part of the larger digital learning ecosystem, which places the learner at the center. This ecosystem relies on constant bi-directional dialogue as the learner engages with learning outcomes, technology, and the context of the situation (which includes the activity, the goals of the activity, and the community the learning is taking place in). As we use technology and support our teens, we should be in constant reflection mode, altering our future programs to best fit the needs of our teens. Feedback we receive can help us discover what we are doing well and what still needs work. How we shape our digital literacy programs is up to us; we know our community of teens better than anyone else in the library. If we highlight and support their interests, they are more likely to be engaged with the program and more likely to return to the library and use our resources.
These variables overlap and are more powerful when used together. The authors note that interactive learning allows “students to see and explore concepts from different angles using a variety of representations” (7). As teens engage, they are likely to discuss their findings with the people around them, which in turn strengthens both the learning and the existing community. As we work with our teens, we should push for creation rather than just going through the steps, because this form of interactive learning strengthens retention of skills and, again, creates conversation. As we implement this programming, we can also be resources and a support team for our teens. It is important to stress that we don’t have to be the experts, and there might be times when we are all learning together. These moments of collective learning enhance our community and create shared memories the teens won’t forget. Looking at the big picture, by keeping these variables in mind, we can empower our teens through access to technology they might not otherwise have regular access to.
To me, these variables seem obvious and are important to keep in mind as we think about creating programming that targets digital literacy skills. This might also be because of the assistantship I am a part of at the University of Illinois at Urbana-Champaign. Our nine-month grant from the Department of Commerce and Economic Opportunity focuses on eliminating the digital divide across the Urbana-Champaign community. I am working with two after-school programs and am developing curriculum to support digital literacy. As we think about this article and our own libraries, this can be our framing question: How can we support teens’ digital literacy with the resources our library has? These variables also push us to provide more than just access to our teens. While access is important, this article reminds us that thoughtful programming can engage our teens, help them become a stronger part of our library community, and help them grow as informed global citizens. We can help them create content they can share with the world and empower them to use technology as a tool to better themselves. Over the following months, I’ll be creating digital literacy programs and will be keeping these variables from the SCOPE article in mind. I cannot wait to share my discoveries with you and hope some of what I learn and create can be used with the teens you serve.
Apps are all the rage nowadays, including apps to help fight rage. That’s right, the iTunes App Store contains several dozen apps designed to manage anger or reduce stress. Smartphones have become such a prevalent component of everyday life that it’s no surprise a demand has risen for phone programs (also known as apps) that help us manage some of life’s most important elements, including personal health. But do these programs improve our ability to manage our health? Do health apps really matter?
Early apps for patients with diabetes demonstrate how a proposed app idea can sound useful in theory but provide limited tangible health benefits in practice. First generation diabetes apps worked like a digital notebook, in which apps linked with blood glucose monitors to record and catalog measured glucose levels. Although doctors and patients were initially charmed by high tech appeal and app convenience, the charm wore off as app use failed to improve patient glucose monitoring habits or medication compliance.
Fitness apps are another example of rough starts among early health app attempts. Initial running apps served as electronic pedometers, recording the number of steps and/or the total distance run. These apps again provided a useful convenience over a conventional pedometer, but were unlikely to increase exercise levels or appeal to individuals who didn’t already run. Apps for other health-related topics such as nutrition, diet, and air pollution ran into similar limitations in improving healthy habits. For a while, it seemed as if the initial excitement among the life sciences community for e-health simply couldn’t be translated into tangible health benefits among target populations.
Luckily, recent changes in app development ideology have led to noticeable increases in health app impacts. Health app developers are now focused on providing useful tools, rather than collections of information, to app users. The diabetes app ManageBGL.com, for example, predicts when a patient may develop hypoglycemia (low blood sugar levels) before the visual/physical signs and adverse effects of hypoglycemia occur. The running app RunKeeper connects to friends’ running profiles to share information, provide suggested running routes, and encourage runners to speed up or slow down to reach a target pace. Air pollution apps let users set customized warning levels, and then predict and warn users when they’re heading towards an area with air pollution that exceeds those levels. Health apps are progressing beyond mere convenience towards a state where they can help the user make informed decisions or perform actions that positively affect and/or protect personal health.
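As a concrete illustration of the customized-warning-level idea, here is a minimal sketch of what such threshold logic might look like. The function name, route segments, and AQI numbers are all hypothetical; a real app would combine location data with a pollution forecast model.

```python
# Hypothetical sketch of a threshold-warning feature: given a user-set
# warning level and forecast air-quality readings along a planned route,
# return the segments the user should be warned about.

from typing import List, Tuple

def pollution_warnings(route_forecast: List[Tuple[str, float]],
                       warning_level: float) -> List[str]:
    """Return route segments whose forecast AQI exceeds the user's level."""
    return [segment for segment, aqi in route_forecast if aqi > warning_level]

# Hypothetical forecast: (segment name, predicted AQI)
forecast = [("riverside path", 42.0),
            ("downtown tunnel", 155.0),
            ("city park", 61.0)]

print(pollution_warnings(forecast, warning_level=100.0))  # ['downtown tunnel']
```

The tool-oriented shift the author describes is visible even in this toy: the app compares a prediction against the user's own threshold and acts on it, rather than just displaying raw pollution data.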
So, do health apps really matter? It’s unlikely that the next generation of health apps will have the same popularity as Facebook or the widespread utility of Google Maps. The impact, utility, and popularity of health apps, however, are increasing at a noticeable rate. As health app developers continue to better understand the strengths and limitations of health apps, and as upcoming technologies that can improve them, such as miniaturized sensors and smartglass, become available, the importance of health-related apps and the proportion of the general public interested in them are only going to grow.
A brief look at ‘grams of interest to engage teens and librarians navigating this social media platform. This week we explore “theme of the day” posts, contests, and good old #libraryshelfies.
Have you come across a related Instagram post this week, or has your library posted something similar? Have a topic you’d like to see in the next installment of Instagram of the Week? Share it in the comments section of this post.
Have you ever thought that your body movements could be transformed into learning stimuli and help you grasp abstract concepts? Subjects in natural science contain plenty of abstract concepts that are difficult to understand through reading-based materials, in particular for younger learners who are still developing their cognitive abilities. For example, elementary school students find it hard to distinguish between similar concepts in fundamental optics, such as concave lens imaging versus convex lens imaging. By performing a simulated exercise in person, learners can comprehend concepts more easily because of the content-related actions involved in the process of learning natural science.
Commonly adopted virtual simulations of natural science experiments, operated with keyboard and mouse, lack a comprehensive learning design. To make the design more comprehensive, we suggest providing learners with a holistic learning context based on embodied cognition, which views mental simulations in the brain, bodily states, environment, and situated actions as integral parts of cognition. In light of recent developments in learning technologies, motion-sensing devices have the potential to be incorporated into learning-by-doing activities that enhance the learning of abstract concepts.
When younger learners study natural science, body movements paired with external perceptions can positively contribute to knowledge construction while they perform simulated exercises. Using a keyboard and mouse for simulated exercises can convey procedural information to learners, but it merely reproduces physical experimental procedures on a computer. For example, when younger learners use conventional controllers to perform fundamental optics simulation exercises, they might not benefit from such controller-based interaction because of its routine-like operations. If environmental factors, namely bodily states and situated actions, were well designed as external information, this additional input could further help learners grasp the concepts through meaningful, educational body participation.
Based on the aforementioned idea, we designed an embodiment-based learning strategy to help younger learners perform optics simulation exercises and learn fundamental optics better. With this learning strategy enabled by the motion-sensing technologies, younger learners can interact with digital learning content directly through their gestures. Instead of routine-like operations, the gestures are designed as content-related actions for performing optics simulation exercises. Younger learners can then construct fundamental optics knowledge in a holistic learning context.
One of the learning goals is to acquire knowledge. We therefore conducted a quasi-experiment to evaluate the embodiment-based learning strategy, comparing the learning performance of the embodiment-based group with that of a keyboard-and-mouse group. The results show that the embodiment-based group significantly outperformed the keyboard-and-mouse group. Further analysis found no significant difference in cognitive load between the two groups, even though applying new technologies in learning can increase the demands on learners’ cognitive resources. The embodiment-based learning strategy thus proved to be an effective design for helping younger learners comprehend abstract concepts of fundamental optics.
For natural science learning, the learning content and the process of physically experimenting are both important for learners’ cognition and thinking. The operational process conveys implicit knowledge regarding how something works to learners. In the experiments of lens imaging, the position of virtual light source and the type of virtual lens can help learners determine the attributes of the virtual image. By synchronizing gestures with virtual light source, a learner not only concentrates on the simulated experimental process but realizes the details of the external perception. Accordingly, learners can further understand how movements of the virtual light source and the types of virtual lens change the virtual image and learn the knowledge of fundamental optics better.
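The lens-imaging relations these simulation exercises model can be made concrete with a short sketch. The following is an illustrative example, not code from the study; the function name and sign convention are my own, based on the standard thin-lens equation (f > 0 for a convex lens, f < 0 for a concave lens).

```python
# Illustrative thin-lens sketch (not from the study): how the position of the
# light source (object distance) and the lens type (focal length sign)
# determine the attributes of the image.
def image_properties(f, d_o):
    """Return (image distance, magnification) from 1/f = 1/d_o + 1/d_i.

    A positive image distance means a real, inverted image; a negative one
    means a virtual, upright image.
    """
    d_i = 1.0 / (1.0 / f - 1.0 / d_o)
    m = -d_i / d_o
    return d_i, m

# Convex lens (f = 10 cm), object at 30 cm: real, inverted, reduced image.
d_i, m = image_properties(10.0, 30.0)    # d_i = 15.0 cm, m = -0.5
# Concave lens (f = -10 cm), object at 30 cm: virtual, upright, reduced image.
d_i2, m2 = image_properties(-10.0, 30.0)  # d_i2 = -7.5 cm, m2 = 0.25
```

Moving the object (or, in the embodied version, the learner's gesture controlling the virtual light source) changes d_o, and the sign flips in d_i and m make the concave/convex distinction directly observable.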
Our body movements have the potential to improve our learning if adequate learning strategies and designs are applied. Although motion-sensing technologies are now available to the general public, widespread adoption will depend on affordable prices and on evidence-based approaches recommended for educational purposes. The embodiment-based design opens a new direction and, we hope, will continue to shed light on improving future learning.
“Pretend the window is a screen,” said poet Susan Blackaby at this morning’s #alsc14 session “The Poetry of Science.” People spend so much time with their eyes glued to their electronic devices that they’re liable to miss what’s going on in their environment. Imagine if people gave as much concentration to nature as they give to their computer screens. How many hawks would they see? What other wonders would they encounter?
Author Margarita Engle joined today’s panel, discussing how she uses both poetry and her science background to advocate for animal and environment conservation. As a child, Engle said, “No curiosity was too small for concentration.” She made the point that the phrase “the spirit of wonder” is applicable to both science and poetry. Because of this commonality, it’s possible to interest poetry loving kids in science phenomena and give science fans the chance to experiment with language.
Poet Janet Wong said that it’s easy–and vital–to create science literacy moments in the classroom and at the library. The key is to be bold. “Science and technology are accessible to people if they’re not afraid.” As gatekeepers of information, teachers and librarians should embrace the responsibility to expose kids to all subjects. Linking language and science may be a key way to make science more approachable. It doesn’t even have to be an elaborate lesson: just a few science literacy moments a week will have a lasting impact on children’s lives.
A short list of tweets from the past week of interest to teens and the library staff that work with them.
Do you have a favorite Tweet from the past week? If so add it in the comments for this post. Or, if you read a Twitter post between September 19 – September 15 that you think is a must for the next Tweets of the Week send a direct or @ message to lbraun2000 on Twitter.
World Water Monitoring Day is an annual celebration reaching out to the global community to build awareness and increase involvement in the protection of water resources around the world. The hope is that individuals will feel motivated and empowered to investigate basic water monitoring in their local area. Championed by the Water Environment Federation, a broader challenge has arisen out of the awareness day, celebrated on September 18th each year. Simple water testing kits are available, and individuals are encouraged to go out and test the quality of local waterways.
Water monitoring can refer to anything from the suitability for drinking from a particular water source, to taking more responsibility for our own consumption of water as an energy source, to the technology needed for alternative energies. Discover more about water issues from around the world using the map below.
Image credit: Ocean beach at low tide against the sun, by Brocken Inaglory. CC-BY-3.0 via Wikimedia Commons.
Innovation is a primary driver of economic growth and of rising living standards, and a substantial body of research has documented its welfare benefits (an example being Trajtenberg’s 1989 study). Few areas have experienced more rapid innovation than the personal computer (PC) industry, with much of this progress associated with a particular component, the central processing unit (CPU). The past few decades have seen a consistent process of CPU innovation, in line with Moore’s Law: the observation that the number of transistors on an integrated circuit doubles every 18-24 months (see figure below). This remarkable innovation process has clearly benefitted society in many profound ways.
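As an illustration of the exponential scaling Moore’s Law implies, here is a minimal sketch; the 24-month doubling period and baseline figures are assumptions for the example, not data from the article.

```python
# Illustrative sketch of Moore's Law scaling (assumed 24-month doubling period).
def transistor_count(base_count, base_year, year, doubling_years=2.0):
    """Project a transistor count from a base year under exponential doubling."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Example: starting from ~2,300 transistors (the Intel 4004, 1971), a 2-year
# doubling period projects a roughly million-fold (2**20) increase by 2011.
count_2011 = transistor_count(2300, 1971, 2011)
print(f"Projected 2011 count: {count_2011:,.0f}")
```

Shortening the doubling period to 18 months raises the 40-year multiplier from 2**20 to about 2**26, which is why the 18-vs-24-month phrasing of the law matters so much over decades.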
A notable feature of this innovation process is that a new PC is often considered “obsolete” within a very short period of time, leading to the rapid elimination of non-frontier products from the shelf. This happens despite the heterogeneity of PC consumers: while some (e.g., engineers or gamers) have a high willingness-to-pay for cutting edge PCs, many consumers perform only basic computing tasks, such as word processing and Web browsing, that require modest computing power. A PC that used to be on the shelf, say, three years ago, would still adequately perform such basic tasks today. The fact that such PCs are no longer available (except via a secondary market for used PCs which remains largely undeveloped) raises a natural question: is there something inefficient about the massive elimination of products that can still meet the needs of large masses of consumers?
Consider, for example, a consumer whose currently-owned, four-year old laptop PC must be replaced since it was severely damaged. Suppose that this consumer has modest computing-power needs, and would have been perfectly happy to keep using the old laptop, had it remained functional. This consumer cannot purchase the old model since it has long vanished from the shelf. Instead, she must purchase a new laptop model, and pay for much more computing power than she actually needs. Could it be, then, that some consumers are actually hurt by innovation?
A natural response to this concern might be that the elimination of older PC models from the shelves likely indicates that demand for them is low. After all, if we believe in markets, we may think that high levels of demand for something would provide ample incentives for firms to offer it. This intuition, however, is problematic: as shown in seminal theoretical work by Nobel Prize laureate Michael Spence, the set of products offered in an oligopoly equilibrium need not be efficient due to the misalignment of private and social incentives. The possibility that yesterday’s PCs vanish from the shelf “too fast” cannot, therefore, be ruled out by economic theory alone, motivating empirical research.
A recent article addresses this question through a retrospective analysis of the U.S. home personal computer market during the years 2001-2004. Data analysis is used to explore the nature of consumers’ demand for PCs and firms’ incentives to offer different types of products. Product obsolescence is found to be a real issue: the average household’s willingness-to-pay for a given PC model is estimated to drop by US$257 as the model ages by one year. Nonetheless, substantial heterogeneity is detected: some consumers’ valuations drop at a much faster rate, while for other consumers, PCs become “obsolete” at a much slower pace.
The paper focuses on a leading innovation: Intel’s introduction of its Pentium M® chip, widely considered a landmark in mobile computing. This innovation is found to have crowded out laptops based on older Intel technologies, such as the Pentium III® and Pentium 4®. It is also found to have made a substantial contribution to aggregate consumer surplus, boosting it by 3.2%-6.3%.
These substantial aggregate benefits were, however, far from uniform across consumer types: the bulk of the benefits were enjoyed by the 20% least price-sensitive households, while the benefits to the remaining 80% were small and sometimes negligible. The analysis also shows that the benefits from innovation could have “trickled down” to the masses of price-sensitive households, had the older laptop models been allowed to remain on the shelf alongside the cutting-edge ones. This would have happened because the presence of the new models would have exerted downward pressure on the prices of older models. In the market equilibrium, this channel is shut down, since the older laptops promptly disappear.
Importantly, while the analysis shows that some consumers benefit from innovation much more than others, no consumers were found to be actually hurt by it. Moreover, the elimination of the older laptops was not found to be inefficient: the social benefits from keeping such laptops on the shelf would have been largely offset by fixed supplier costs.
So what do we make of this analysis? The main takeaway is that one has to go beyond aggregate benefits and consider the heterogeneous effects of innovation on different consumer types, and the possibility that rapid elimination of basic configurations prevents the benefits from trickling down to price-sensitive consumers. Just the same, the paper’s analysis is constrained by its focus on short-run benefits. In particular, it misses certain long-term benefits from innovation, such as complementary innovations in software that are likely to trickle down to all consumer types. Additional research is, therefore, needed in order to fully appreciate the dramatic contribution of innovation in personal computing to economic growth and welfare.
More links and pins are coming to the ALSC and Día Pinterest accounts!
Photo by Katie Salo
In an effort to increase the material pinned to the Pinterest account, all ALSC committees will have the opportunity to maintain their own boards and content. ALSC committees will then be able to share relevant blog posts, links, and resources that relate to their committee’s work and charge. Committee chairs that are interested in using social media should contact Amy Koester, chair of Public Awareness Committee at amy(dot)e(dot)koester(at)gmail(dot)com.
ALSC’s Public Awareness Committee will continue to maintain the Día page, but with more regularly pinned content. Look for new ideas and inspiration to bring your Día programming up to the next level.
We’re looking forward to the changes that will be taking place and hope that members will find loads of useful information about the work that ALSC is doing! If you have any suggestions for boards or pins that should be on the ALSC Pinterest board, please feel free to leave those in the comments.
Katie Salo is an Early Literacy Librarian at Indian Prairie Public Library in Darien, IL and is writing this post for the Public Awareness Committee. You can reach her at simplykatie(at)gmail(dot)com.
App-advisory can be intimidating, especially for those of us who are not heavily engaged in touch-screen technology in our personal lives. Although I am excited to be a new member of the Children and Technology Committee, and this is a professional interest of mine, I must confess: I don’t own a smartphone or a tablet. But I strongly believe that whatever your personal habits or philosophies, as professionals, we need to be willing and able (and enthusiastic!) to be media mentors, modeling responsible new media use and providing recommendations for parents and families. With so many apps out there, many of which are labeled “educational,” we need to be able to provide parents with trusted recommendations and advice. If you can do reader’s advisory, you already have the skills to do app advisory! Here are some suggestions, based on what we did at the Wellesley Free Library.
Get to know your material! Read app reviews (see list of review sources below) and keep track of the apps about which you read. We use a Google spreadsheet, so that all Children’s Department staff can contribute. This includes, when available, recommended age (though this is something significantly lacking in many app reviews), price, platform, categories, and our comments. Keeping this information centralized and organized makes it easy to come up with specific apps to recommend to a patron, or to pull for a list.
Play around with the apps! If you have money to spend (consider asking your Friends group for money for apps, especially if you will be using the apps in library programs), download some apps that seem interesting and try them out. Even if you can’t spend money, you can try out free apps or download free “lite” versions of apps. Playing with the app allows you to give a more in-depth description and detailed information in your advisory (consider the difference between recommending a book based on a review you read and having read the book itself).
Choose your method of advisory. App advisory can take many forms. There is the individual recommendation at the reference desk, there are app-chats (the app version of the book-talk), which have been discussed in an article on the ALSC blog by Liz Fraser, and then there are app-lists. For the past year, we have created monthly themed app lists, mostly for young children between the ages of 2 and 6. The themes have included: interactive books, music, math, letters, and more. Be sure to include free apps as well as apps available for non-Apple devices on your lists.
Provide advice, along with recommendations. On the back of our paper app lists, and on the website where we post links to the app-list Pinterest boards, we offer advice to parents about using interactive technology with young children.
A year later, still without a smartphone or tablet, I feel much more confident about recommending apps to patrons, reviewing and evaluating apps, and building our collection, and you can too! You already have the tools for evaluating media that meets children’s developmental needs and creating interesting and attractive advisory methods for families. The next step is simply taking it to a new platform!
A short list of tweets from the past week of interest to teens and the library staff that work with them.
Do you have a favorite Tweet from the past week? If so add it in the comments for this post. Or, if you read a Twitter post between September 12 – September 18 that you think is a must for the next Tweets of the Week send a direct or @ message to lbraun2000 on Twitter.
From mechanical turks to science fiction novels, our mobile phones to The Terminator, we’ve long been fascinated by machine intelligence and its potential — both good and bad. We spoke to philosopher Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, about a number of pressing questions surrounding artificial intelligence and its potential impact on society.
Are we living with artificial intelligence today?
Mostly we have only specialized AIs – AIs that can play chess, or rank search engine results, or transcribe speech, or do logistics and inventory management, for example. Many of these systems achieve super-human performance on narrowly defined tasks, but they lack general intelligence.
There are also experimental systems that have fully general intelligence and learning ability, but they are so extremely slow and inefficient that they are useless for any practical purpose.
AI researchers sometimes complain that as soon as something actually works, it ceases to be called ‘AI’. Some of the techniques used in routine software and robotics applications were once exciting frontiers in artificial intelligence research.
What risk would the rise of a superintelligence pose?
It would pose existential risks – that is to say, it could threaten human extinction and the destruction of our long-term potential to realize a cosmically valuable future.
Would a superintelligent artificial intelligence be evil?
Hopefully it will not be! But it turns out that most final goals an artificial agent might have would result in the destruction of humanity and almost everything we value, if the agent were capable enough to fully achieve those goals. It’s not that most of these goals are evil in themselves, but that they would entail sub-goals that are incompatible with human survival.
For example, consider a superintelligent agent that wanted to maximize the number of paperclips in existence, and that was powerful enough to get its way. It might then want to eliminate humans to prevent us from switching it off (since that would reduce the number of paperclips built). It might also want to use the atoms in our bodies to build more paperclips.
Most possible final goals, it seems, would have similar implications to this example. So a big part of the challenge ahead is to identify a final goal that would truly be beneficial for humanity, and then to figure out a way to build the first superintelligence so that it has such an exceptional final goal. How to do this is not yet known (though we do now know that several superficially plausible approaches would not work, which is at least a little bit of progress).
How long have we got before a machine becomes superintelligent?
Nobody knows. In an opinion survey we did of AI experts, we found a median view that there was a 50% probability of human-level machine intelligence being developed by mid-century. But there is a great deal of uncertainty around that – it could happen much sooner, or much later. Instead of thinking in terms of some particular year, we need to be thinking in terms of probability distributed across a wide range of possible arrival dates.
So would this be like Terminator?
There is what I call a “good-story bias” that limits what kind of scenarios can be explored in novels and movies: only ones that are entertaining. This set may not overlap much with the group of scenarios that are probable.
For example, in a story, there usually have to be humanlike protagonists, a few of which play a pivotal role, facing a series of increasingly difficult challenges, and the whole thing has to take enough time to allow interesting plot complications to unfold. Maybe there is a small team of humans, each with different skills, which has to overcome some interpersonal difficulties in order to collaborate to defeat an apparently invincible machine which nevertheless turns out to have one fatal flaw (probably related to some sort of emotional hang-up).
One kind of scenario that one would not see on the big screen is one in which nothing unusual happens until all of a sudden we are all dead and then the Earth is turned into a big computer that performs some esoteric computation for the next billion years. But something like that is far more likely than a platoon of square-jawed men fighting off a robot army with machine guns.
If machines became more powerful than humans, couldn’t we just end it by pulling the plug? Removing the batteries?
It is worth noting that even systems that have no independent will and no ability to plan can be hard for us to switch off. Where is the off-switch to the entire Internet?
A free-roaming superintelligent agent would presumably be able to anticipate that humans might attempt to switch it off and, if it didn’t want that to happen, take precautions to guard against that eventuality. By contrast to the plans that are made by AIs in Hollywood movies – which plans are actually thought up by humans and designed to maximize plot satisfaction – the plans created by a real superintelligence would very likely work. If the other Great Apes start to feel that we are encroaching on their territory, couldn’t they just bash our skulls in? Would they stand a much better chance if every human had a little off-switch at the back of our necks?
So should we stop building robots?
The concern that I focus on in the book has nothing in particular to do with robotics. It is not in the body that the danger lies, but in the mind that a future machine intelligence may possess. Where there is a superintelligent will, there can most likely be found a way. For instance, a superintelligence that initially lacks means to directly affect the physical world may be able to manipulate humans to do its bidding or to give it access to the means to develop its own technological infrastructure.
One might then ask whether we should stop building AIs? That question seems to me somewhat idle, since there is no prospect of us actually doing so. There are strong incentives to make incremental advances along many different pathways that eventually may contribute to machine intelligence – software engineering, neuroscience, statistics, hardware design, machine learning, and robotics – and these fields involve large numbers of people from all over the world.
To what extent have we already yielded control over our fate to technology?
The human species has never been in control of its destiny. Different groups of humans have been going about their business, pursuing their various and sometimes conflicting goals. The resulting trajectory of global technological and economic development has come about without much global coordination and long-term planning, and almost entirely without any concern for the ultimate fate of humanity.
Picture a school bus accelerating down a mountain road, full of quibbling and carousing kids. That is humanity. But if we look towards the front, we see that the driver’s seat is empty.
Featured image credit: Humanrobo. Photo by The Global Panorama, CC BY 2.0 via Flickr
September is the month when lots of teenagers in the UK move on, leaving home for college or gap years or other adventures.
The growing-up may have felt, at times, like very long years, so rejoice now that change has arrived at last.
Rejoice, for a moment, in what you’re losing. All those late arrivals and sudden slam-door exits, the too-much too-loud music or grunts-plus-earphones; the washing machine full of dirty clothes; the presence of unknown bodies sleeping on living room floors and sofas; the big screens and small screens constantly flickering with fascinating stuff, and more.
Aha! Soon you’ll be nostalgic for bathrooms stacked with more grooming products than can be daubed on one person in a lifetime. Even so, it will also feel very good to reclaim some of the space that you knew was once there.
However, before it’s too late, be aware of what you will be losing too. Especially if you’re a freelance loner working from home. The person who is probably your most valuable technical resource is leaving. Not only will all that precious and vital energy disappear - and no, I'm not joking! - but so will all their random knowledge, skills and fluency with all things technical.
From the moment that door closes, you will be relying on your own knowledge - and how does that stand up right now, all by itself?
I have no precious teen tech around right now. I have no handy geek or wizard who can help me with the latest social media trends, no person who can explain how to do the things I want to do, or the thing I don't know I should know about.
I don’t sit there bleating (even if this post may seem so.)
I ask, I enquire, I go to the on-line videos and follow the simple steps. I google for answers, try things out and solve problems.
But, but, but . . . so often I find a gap where an essential bit of information should be.
Yes, the screen can show me “this”, but what about the “that” that goes with it? The missing link that takes such hours to discover, the reason behind x or y? I'd really like to borrow a socialised techno-wise human being for a week or three, please. Aaagh!
Maybe you are lucky? Maybe you are young yourself or you work outside home and have easy access, not only to training but to the casual wisdom of facts being passed on and gadgets explained.
If not, be warned.
If you work at your writing at home, alone, from now on you’ll be battling with new media and new work at the same time, and there aren't many hours to go round.
Be nice to your nerds while you’ve got them. Today is the first of September. You’ve got about two weeks to download all they know.
This week—August 15, to be exact—celebrates the climax of Air Conditioning Appreciation Days, a month-long tribute to the wonderful technology that has made summer heat a little more bearable for millions of people. Census figures tell us that nine out of ten Americans have central air conditioning, or a window unit, or more than one, in our homes; in our cars, it’s nearly universal. Go to any hardware or home goods store and you’ll see a pile of boxes containing no-fuss machines in a whole range of sizes, amazingly affordable, plop-’em-in-the-window-and-plug-’em-in-and-you’re-done. Not only do we appreciate the air conditioner, but we appreciate how easy it is to become air conditioned.
When it comes to cool, we’ve come a long way. But in earlier times, it was nowhere near as simple for ordinary citizens to get summertime comfort.
One of the first cooling contraptions offered to the public showed up around 1865, the brainchild of inventor Azel S. Lyman: Lyman’s Air Purifier. This consisted of a tall, bulky cabinet that formed the headboard of a bed, divided into various levels that held ice to cool the air, unslaked lime to absorb humidity, and charcoal to absorb “minute particles of decomposing animal and vegetable matter” as well as “disgusting gases.” Relying on the principle that hot air rises and cool air sinks, air would (theoretically) enter the cabinet under its own power, rise to encounter the ice, be dried by the lime, purified by the charcoal, and finally ejected—directly onto the pillow of the sleeper—“as pure and exhilarating as was ever breathed upon the heights of Oregon.” Lyman announced this marvel in Scientific American, and in the same issue ran an advertisement looking for salesmen. Somehow the Air Purifier didn’t take off.
More interesting to homeowners was the device that showed up in 1882, the electric fan. Until then, fans were powered by water or steam, usually intended for public buildings rather than homes, and most of them tended to circulate air lazily. But the electric model was quite different, with blades that revolved at 2,000 rpm—“as rapidly as a buzz saw,” observed one wag, and for years they were nicknamed “buzz” fans. They were some of the very first electrically powered appliances available for sale. They were also exorbitant, costing $20 (in modern terms, about $475). But that didn’t stop the era’s big spenders from seizing upon them eagerly. Delighted reviewers of the electric fan claimed that it was “warranted to lower the temperature of a room from ninety-five to sixty degrees in a few minutes” and that its effect was “like going into a cool grove.”
The fan combined with ice around the turn of the century, producing an eight-foot-tall metal object that its inventor called “The NEVO, or Cold Air Stove.” The principle was simple: air entered through a small pipe at the top, was pulled by a fan through the NEVO’s body—which had to be stuffed daily with 250 pounds of ice and salt to provide the cooling—and would then be discharged out an opening at the bottom. “It dries, washes, and purifies the air.” As the NEVO had more in common with a gigantic ice cream freezer than with actual temperature control, and the smallest NEVO cost $80 (nowadays, $1,700) and cost $100 per season (over $2,000) to operate, it didn’t get far.
By this time, a young engineer named Willis Carrier had developed a mechanical system that could actually cool the air and dry it, the Apparatus for Treating Air. But this was machinery of the Giant Economy Size, and used only in factories. In 1914, one wealthy gent asked Carrier to install a system in his new forty-bedroom Minneapolis home, and indeed the system was the same type that “a small factory” would use. Unfortunately, this proud homeowner died before the house was completed, and historians speculate that the machinery was never even turned on.
It wasn’t until 1929 that Frigidaire announced the first home air conditioner, the Frigidaire Room Cooler. This wasn’t in any way a lightweight portable. The Room Cooler consisted of a four-foot-tall metal cabinet, weighing 200 pounds, that had to be connected by pipes to a separate 400-pound compressor (“may be located in the basement, or any convenient location”). And it cost $800, in those days the same as a Pontiac roadster. While newspaper and magazine articles regarded the Room Cooler as a hot-weather miracle, the price (along with the setup requirements) meant that its customers came almost solely from the ranks of the rich, or businesses with cash to burn. Then fate intervened only months after the Room Cooler’s introduction when the stock market crashed, leaving very little cash for anyone to burn. Home air conditioning would have to wait until the country climbed back from the Depression.
Actually, it waited until the end of World War II, when the postwar housing boom prompted brand-new homeowners to fill their houses with the latest comforts. Along with television, air conditioning was at the top of the wish list. And at last, the timing was right; manufacturers were able to offer central cooling, as well as window units, at affordable prices. The compressor in the backyard, or the metal posterior droning out the window, became bona fide status symbols. By 1953, sales topped a million units—and the country never looked back.
A large variety of complex systems in ecology, climate science, biomedicine, and engineering have been observed to exhibit so-called tipping points, where the dynamical state of the system abruptly changes. Typical examples are the rapid transition in lakes from clear to turbid conditions or the sudden extinction of species after a slight change in environmental conditions. Data and models suggest that detectable warning signs may precede some, though clearly not all, of these drastic events. This view is also corroborated by recently developed abstract mathematical theory for systems in which processes evolve at different rates and are subject to internal and/or external stochastic perturbations.
One main idea for deriving warning signs is to monitor the fluctuations of the dynamical process by calculating the variance of a suitable monitoring variable. When the tipping point is approached via a slowly drifting parameter, the stabilizing effects of the system slowly diminish and the noisy fluctuations increase according to certain well-defined scaling laws.
Based upon these observations, it is natural to ask whether these scaling laws are also present in human social networks and could allow us to make predictions about future events. This is an exciting open problem, to which at present only highly speculative answers can be given. It is indeed very difficult to predict a priori unknown events in a social system. Therefore, as an initial step, we reduce the problem to a much simpler one: understanding whether the same mechanisms that have been observed in the context of the natural sciences and engineering could also be present in sociological domains.
In our work, we provide a very first step towards tackling a substantially simpler question by focusing on a priori known events. We analyse a social media data set with a focus on classical variance and autocorrelation scaling-law warning signs. In particular, we consider a few events that are known to occur at a specific time of year, e.g., Christmas, Halloween, and Thanksgiving. We then consider time series of the frequency of Twitter hashtags related to these events in the weeks before the actual event, but excluding the event date itself and some time period before it.
Now suppose we do not know that a dramatic spike in the number of Twitter hashtags, such as #xmas or #thanksgiving, will occur on the actual event date. Are there signs of the same stochastic scaling laws observed in other dynamical systems visible some time before the event? The more fundamental question is: Are there similarities to known warning signs from other areas also present in social media data?
We answer this question affirmatively as we find that the a priori known events mentioned above are preceded by variance and autocorrelation growth (see Figure). Nevertheless, we are still very far from actually using social networks to predict the occurrence of many other drastic events. For example, it can also be shown that many spikes in Twitter activity are not predictable through variance and autocorrelation growth. Hence, a lot more research is needed to distinguish the different dynamical processes that lead to large outbursts of activity on social media.
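The warning signs described above can be illustrated with a short sketch. The function below computes the two classical early-warning indicators, rolling-window variance and lag-1 autocorrelation, for any time series (for instance, daily hashtag counts); the data, window length, and function name here are illustrative assumptions, not part of the original analysis.

```python
import numpy as np

def rolling_warning_signs(series, window):
    """Compute rolling variance and lag-1 autocorrelation over a sliding
    window -- two classical early-warning indicators for tipping points."""
    variances, autocorrs = [], []
    for start in range(len(series) - window + 1):
        w = np.asarray(series[start:start + window], dtype=float)
        variances.append(w.var())
        a, b = w[:-1], w[1:]
        if a.std() == 0 or b.std() == 0:
            # constant window: autocorrelation undefined, report 0
            autocorrs.append(0.0)
        else:
            # lag-1 autocorrelation: correlation of the window with
            # a copy of itself shifted by one time step
            autocorrs.append(np.corrcoef(a, b)[0, 1])
    return np.array(variances), np.array(autocorrs)
```

On a series whose fluctuations grow as the event approaches, the rolling variance rises toward the end of the record, which is the kind of scaling-law growth monitored as a warning sign.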
The findings suggest that further investigations of dynamical processes in social media would be worthwhile. Currently, a main focus of research on social networks lies on structural questions, such as: Who connects to whom? How many connections do we have on average? Who are the hubs in social media? However, if one takes dynamical processes on the network, as well as the changing dynamics of the network topology, into account, one may obtain a much clearer picture of how social systems compare and relate to classical problems in physics, chemistry, biology, and engineering.
This summer, ALA’s Office for Information Technology Policy and Office for Intellectual Freedom released a policy brief marking a decade of school and public libraries limiting patrons’ access to online information due to the Children’s Internet Protection Act (CIPA).
Since 2003, those schools and public libraries that accept federal funding to purchase internet access have been required by CIPA to use filtering software on all of their internet-enabled computers. This filtering software must block access to images classified as “obscene,” “child pornography,” or “harmful to minors.” Any adult wishing to access material blocked under the auspices of “harmful to minors” is backed in his/her quest for content by the First Amendment and CIPA, which requires that the material be unblocked by the school or library. The first two categories (“obscene” and “child pornography” images) are not similarly protected under the First Amendment, so schools and libraries are not required to unblock those materials.
In theory, CIPA is fairly unobjectionable: none of us want to provide materials harmful to minors, child pornography, or obscenity. In practice, however, schools and libraries have applied CIPA in a draconian fashion: over-filtering for fear of patrons finding objectionable materials and for fear of losing federal funding. Going above and beyond CIPA’s filtering guidelines has resulted in egregious bans on social media, gaming, and emerging sites; nursing exams and other health information being blocked; embarrassment and confusion for patrons; and a negative public perception of technology at the library. Furthermore, as Fencing Out Knowledge states, over-filtering does a disservice to our 21st century learners, is contrary to ALA’s Bill of Rights, and disproportionately affects the economically disadvantaged.
Fortunately, the report includes four recommendations for ALA to take action on this CIPA-originating issue of over-filtering. The recommendations are:
Increase awareness of the spectrum of filtering choices.
Develop a toolkit for school leaders.
Establish a digital repository of internet filtering studies.
Conduct research to explore the educational use of social media platforms and assess the impact of filtering in schools.
While ALA tackles those items at a national level, in your own community you can advocate for young adults’ broad access to the internet by becoming familiar with CIPA’s requirements; educating yourself on the harms of over-filtering; and advocating for digital policies that best fit your school or library mission and your teenagers’ 21st century needs. Don’t wait for ALA to finish their action items! Start the new school year by coming to the table now. Read the report as soon as possible, and become a consistent, professional voice at your school or library’s Technology Committee.
Facebook celebrated its tenth anniversary in February. It has over 1.2 billion active users — equating to one user for every seven people worldwide. This social networking phenomenon has not only given our society a new way of sharing information with others; it’s changed the way we think about “liking” and “friending.” Actually, “friending” was not even considered a proper word until Facebook popularized its use. Traditionally, a friend is not just a person one knows, but a person with whom one shares personal affection, connection, trust, and familiarity. Under Facebook-speak, friending is simply the act of attaching a person to a contact list on the social networking website. One does not have to like, trust, or even know people in order to friend them. The purpose of friending is to connect people interested in sharing information. Some people friend only “traditional friends.” Others friend people on Facebook who are “mere acquaintances,” business associates, and even people with whom they have no prior relationship. On Facebook, “liking” is supposed to indicate that the person enjoys or is partial to the story, photo, or other content that someone has posted on Facebook. One does not have to be a friend to like someone’s content, and one may also like content on other websites.
Unbeknownst to many Facebook users is how Facebook and other websites gather and use information about people’s friending and liking behaviors. For instance, the data gathered by Facebook is used to help determine which advertisements a particular user sees. Although Facebook does have some privacy protection features, many people do not use them, meaning that they are sharing private information with anyone who has access to the Internet. Even if a person tries to restrict information to “friends,” there are no provisions to ensure that those friends do not share the information with others, posting information in publicly accessible places or simply sharing information in a good, old-fashioned manner – oral gossip. So, given what we know (and perhaps don’t know) about liking and friending, should social workers like their clients, encourage clients to like them, or friend their clients?
When considering the use of online social networking, social workers need to consider their ethical duties with respect to their primary commitment to clients, their duty to maintain appropriate professional boundaries, and their duty to protect confidential client information (NASW, Code of Ethics, 2008, Standards 1.01, 1.06, and 1.07). Allow me to begin with the actual situation that instigated my thinking about these issues. Recently, I saw a social worker’s Facebook page advertising her services. She encouraged potential clients to become friends and to like her. She offered a 10% discount in counseling fees for clients who liked her. What could possibly be a problem with providing clients with this sort of discount? The worker was providing clients with a benefit, and all they had to do was like her… they didn’t even have to become her friend.
In terms of 1.01, the social worker should ask herself whether she was acting in a way that promoted client interests, or whether she was primarily promoting her own interests. If her decision to offer discounts was purely a decision to promote profits (her interests), then she may be taking advantage (perhaps unintentionally) of her clients. If her clients were receiving benefits that outweighed the costs and risks, then she may be in a better position to justify the requests for friends and likes.
With regard to maintaining appropriate boundaries, the worker should ask how clients perceive her requests for friends and likes. Do clients understand that the requests are in the context of maintaining a professional relationship, or might terms such as friending and liking blur the distinctions between professional and social relationships? If she truly wants to know whether clients value her services (as opposed to merely “like” her), perhaps she should use a more valid and reliable measurement of client satisfaction or worker effectiveness. There are no Likert-type scales when it comes to liking on Facebook. You can only “like” or “do nothing.”
Confidentiality presents perhaps the most difficult issues when it comes to liking and friending. When a client likes a social worker who specializes in gambling addiction, for instance, does the client know that he may start receiving advertisements for gambling treatment services… or perhaps for casinos, gambling websites, or racetracks? Who knows what other businesses might be harvesting online information about the client. “OMG!” Further, does the client realize that the client’s Facebook friends will know the client likes the social worker? Although the client is not explicitly stating he is a client, others may draw this conclusion – and remember, these “others” are not necessarily restricted to the client’s trusted confidantes. They may include co-workers, neighbors, future employers, or others who may not hold the client’s best interests to heart.
One could say it’s a matter of consent – the worker is not forcing the client to like her, so liking is really an expression of the client’s free will. All sorts of businesses offer perks to people who like or friend them. Shouldn’t clients be allowed to pursue a discount as long as they know the risks? Hmmm… do they know the actual risks? Do they know that what seems like an innocuous act – liking – may have severe consequences one day? Consider, is it truly an expression of free will if the worker is using a financial incentive – particularly if clients have very limited income and means to pay for services? Further, young children and people with dementia or other mental conditions may not have the capacity to understand the risks and make truly informed choices.
Digital natives (people born into the digital age) might say these are the ramblings of an old curmudgeon (ok, they probably wouldn’t use the term curmudgeon). When considering the ethicality of social work behaviors, we need to consider context. The context of Facebook, for instance, includes a culture where sharing seems to be valued much more than privacy. Many digital natives share intimate details of their lives without grave concerns about confidentiality. They have not experienced negative repercussions from posting details about their intimate relationships, break-ups, triumphs, challenges, and even embarrassments. They may not view liking a social worker’s website as any riskier than liking their favorite ice cream parlor. So, to a large segment of Facebook users, is this whole issue much ado about nothing?
In the context of Internet risks, there are far more severe concerns than social workers asking clients to like them on Facebook. Graver Internet risks include cyber-bullying, identity theft, and hacking into national defense, financial institutions, and other important systems that are vulnerable to cyber-terrorism. Still, social workers should be cautious about asking clients to like them… on Facebook or otherwise.
The Internet offers social workers many different approaches to communicating with clients. Online communication should not be feared. On the other hand, social workers should consider all potential risks and benefits before making use of a particular online communication strategy. Social work and many other helping professions are still grappling with the ethicality of various online communication strategies with clients. What is hugely popular now – including Facebook – may continue to grow in popularity. However, with time and experience, significant risks may be exposed. Some technologies may lose popularity, and others may take their place.
Headline image credit: Internet icons and symbols. Public domain via Pixabay.
Owls and robots. Nature and computers. It might seem like these two things don’t belong in the same place, but The Unfinished Fable of the Sparrows (in an extract from Nick Bostrom’s Superintelligence) sheds light on a particular problem: what if we used our highly capable brains to build machines that surpassed our general intelligence?
It was the nest-building season, but after days of long hard work, the sparrows sat in the evening glow, relaxing and chirping away.
“We are all so small and weak. Imagine how easy life would be if we had an owl who could help us build our nests!”
“Yes!” said another. “And we could use it to look after our elderly and our young.”
“It could give us advice and keep an eye out for the neighborhood cat,” added a third.
Then Pastus, the elder-bird, spoke: “Let us send out scouts in all directions and try to find an abandoned owlet somewhere, or maybe an egg. A crow chick might also do, or a baby weasel. This could be the best thing that ever happened to us, at least since the opening of the Pavilion of Unlimited Grain in yonder backyard.”
The flock was exhilarated, and sparrows everywhere started chirping at the top of their lungs.
Only Scronkfinkle, a one-eyed sparrow with a fretful temperament, was unconvinced of the wisdom of the endeavor. Quoth he: “This will surely be our undoing. Should we not give some thought to the art of owl-domestication and owl-taming first, before we bring such a creature into our midst?”
Replied Pastus: “Taming an owl sounds like an exceedingly difficult thing to do. It will be difficult enough to find an owl egg. So let us start there. After we have succeeded in raising an owl, then we can think about taking on this other challenge.”
“There is a flaw in that plan!” squeaked Scronkfinkle; but his protests were in vain as the flock had already lifted off to start implementing the directives set out by Pastus.
Just two or three sparrows remained behind. Together they began to try to work out how owls might be tamed or domesticated. They soon realized that Pastus had been right: this was an exceedingly difficult challenge, especially in the absence of an actual owl to practice on. Nevertheless they pressed on as best they could, constantly fearing that the flock might return with an owl egg before a solution to the control problem had been found.
A short list of tweets from the past week of interest to teens and the library staff that work with them.
Do you have a favorite Tweet from the past week? If so add it in the comments for this post. Or, if you read a Twitter post between August 29 and September 4 that you think is a must for the next Tweets of the Week send a direct or @ message to lbraun2000 on Twitter.