30 Days of Teen Programming: Evaluate Outcomes

Admission time: like many of us in Library Land, I am still figuring out the best ways to measure program outcomes. Marking attendance is relatively easy (although to be fair, sometimes the teens move around a lot, which can make them tricky to count). It's a bit harder to identify the changes I want to see as a result of my program, and then accurately measure those changes.

The Programming Guidelines ask us to "Engage in youth-driven, evidence-based evaluation and outcome measurement." I'm not quite there yet. As I mentioned in my post about our weekly drop-in, we've been working with participants in that program to identify priorities, and now we're moving towards evaluations that will measure whether those priorities are being met. But it's still a work in progress.

What I have gotten better at is working with community partners to create evaluations for programs. For example, we regularly collaborate with Year Up to build their students' information and digital literacy skills. Before each workshop, we meet with Year Up staff to make sure that we'll be teaching the skills they want participants to gain. Collaborating with partners on our evaluations and learning from them about their own evaluation methods has made a huge difference in the quality of our evaluations overall.

At Year Up, I give the students pre- and post-tests to see how much our classes are moving the needle on desired skills and knowledge. We send Year Up staff an early draft of the tests (same questions for both) and incorporate their feedback in the final evaluation tool. Seems foolproof, right?
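(For the data-minded: since the pre- and post-tests use the same questions, the analysis boils down to comparing correct-answer rates before and after the workshop. Here's a rough sketch of that comparison in Python. The question labels, counts, and class size are all made up for illustration, not taken from our actual Year Up surveys.)

```python
# Minimal sketch of a pre/post-test comparison. All data below is
# invented for illustration; these are not the real survey questions
# or results.

pre_correct = {
    "find a book in the catalog": 6,
    "pick good search keywords": 9,
    "is everything on Google?": 11,
}
post_correct = {
    "find a book in the catalog": 17,
    "pick good search keywords": 16,
    "is everything on Google?": 12,  # barely moves: a hint the question may be flawed
}
N_STUDENTS = 20  # assumed number of students who took both tests

for question, pre in pre_correct.items():
    post = post_correct[question]
    pre_pct = 100 * pre / N_STUDENTS
    post_pct = 100 * post / N_STUDENTS
    print(f"{question}: {pre_pct:.0f}% -> {post_pct:.0f}% "
          f"({post_pct - pre_pct:+.0f} points)")
```

A question whose score barely budges between tests is either measuring something the workshop doesn't teach, or (as I was about to discover) is badly worded.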

[Graph: Year Up pre- and post-test data]

Well, here's a graph I made from the results of an earlier incarnation of those pre- and post-tests. Can you spot the problem(s)?

Library jargon. Words like "catalog" and "keywords" muddied the results, because (especially before the workshop) students didn't really know what those words meant. My vague question about whether "all the world's knowledge" is available via Google wasn't great either. Students figured that the answer was probably "no"--because of course librarians hate Google. (I don't, honest!) As I phrased it, the question didn't measure the movement I saw in their understanding of WHY a lot of the world's best info isn't available on Google. (Which as we all know is about money, honey.)

This wasn't the best evaluation tool, and the next time I created a survey for Year Up, I drastically rewrote the questions. But that's okay! Even this flawed survey measured some outcomes--e.g., a huge increase in library resource knowledge among participants. And I learned some pitfalls to avoid next time.

I'm a big fan of giving myself permission to fail, and I take myself up on it a lot--especially when it comes to measuring outcomes. The important thing is to learn and adjust, and get better data next time.
