
Creating Knowledge Conference 2021

On June 3rd and 4th, several of us from the university library here at UiT in Tromsø organized the 10th Creating Knowledge conference! The theme is information literacy, and the conference has been held approximately every other year since 1999 in one of the Nordic countries. Creating Knowledge conferences are arranged by NordINFOLIT, a Nordic collaborative forum for information literacy.

The conference was supposed to be held last year, but along came corona, so we postponed it a year. Unfortunately, it wasn’t safe enough to hold a physical conference in 2021 either, so we made the tough decision to go fully digital. We considered a hybrid format – mostly digital, but with some physical participants – but realized that this would be too complicated. It’s a shame, because we really wanted to welcome guests here to exotic, beautiful Tromsø during the magical summer with midnight sun!

I’m trying to think positively though… For the organizing committee, one advantage of a purely digital conference is that we didn’t have to organize hotels, meals, coffee breaks, activities or billing. Another advantage is that it was accessible to everyone, especially since we didn’t charge anything. We had over 600 participants, many more than the usual 150–200, and some from as far away as New Zealand. (These were the women I was supposed to work with there during my two-month research stay, but of course that didn’t happen either…)

A major disadvantage of a purely digital conference is of course that you can’t meet people in person. Networking is an important part of conferences. To compensate for this, we had a digital “lounge” where people could chat informally.

We had 4 wonderful keynotes:

1. Karen Douglas (Professor of Social Psychology at the University of Kent, UK), who talked about the psychology of conspiracy theories. This is a very relevant topic these days, and quite controversial, so we weren’t permitted to record her talk.

2. Roger Säljö (well-known researcher of learning in Scandinavia), with an excellent keynote called “Learning in a designed world: Information literacy from rock carvings to apps”.

3. Jane Secker (Senior Lecturer in Educational Development at City University, London, and information literacy guru!), with her inspiring, informative keynote on “Frames, models and definitions: Rethinking information literacy for the digital age”.

4. Tove Dahl (educational psychologist at UiT, and most importantly, my main supervisor!), with her amazingly inspiring keynote called “What if being or becoming information literate were an adventure?”

The last three of these keynotes were recorded and will very soon be available on our conference website. If you’re feeling a little stuck or unmotivated in your teaching, be sure to watch Tove’s keynote about tigers!

Many participants presented papers or held round-table discussions, and we had 2 or 3 parallel tracks to choose from during the two-day conference. This was all a bit challenging technically, but our competent organizing-committee members, together with tech support from RESULT, managed the Zoom rooms perfectly. Everything worked!

Torstein and I presented research from our article about measuring IL on Friday. It felt really strange to sit alone in my office and present live to over 150 people! We presented right after my acquaintances from New Zealand, and since the talk after us was cancelled, the four of us used the opportunity to discuss IL assessment for 30 minutes with anyone who was interested – and quite a few were – so that was great! I was a bit nervous – mostly about the technical side – but everything went smoothly. It wasn’t as scary as presenting in person. As a PhD student, I got 2 credits for presenting at an international conference, which was a nice bonus. 🙂

I was also chair for a session with four presentations – a first for me. I’d written down some questions for each presentation, just in case no one else wrote questions in the chat, and that was a good thing!

Between papers and keynotes, we showed prerecorded performances by the choir TAKk. The women sang outdoors on a cold, windy “summer” day, in beautiful locations around Tromsø. Other cultural contributions were a presentation of a collection of old maps entitled “Creating Knowledge of the Far North: The earliest printed maps as icons of (mis)information”, a fun talk entitled “Tromsø: A likely city in an unlikely place”, and a slideshow of my photos of beautiful scenery in the Tromsø area (interspersed with slides with conference info).

Although I didn’t do nearly as much as some of the other members of the organizing committee (especially Helene, Torstein, Mariann and Mark), it was a lot of work! It got especially intense the week of the conference – I didn’t do anything else. And of course, there were plenty of last-minute changes, including cancelled presentations, that we had to deal with constantly during the conference.

I was impressed with the quality of most of the papers that I heard, and with the enthusiasm of conference delegates. However, since I had several roles during the conference, including answering e-mails from delegates with various problems that needed a quick fix, I didn’t actually get to listen to many of the presentations. This was unfortunate, since I was interested in nearly everything. Papers in the parallel sessions weren’t recorded, and aren’t accessible. Oh well – it was exciting and instructive to help organize the conference, at least! And evaluations we’ve received from delegates so far have mainly been positive, despite the fact that the conference was digital.

🙂

First article for PhD published!

When it rains, it pours!

After not having much to report on the past several months, the last couple of weeks have been full of exciting events. I’ll start with yesterday’s publication of the first article for my PhD in the Journal of Information Literacy. 🙂 Here’s the reference:

Nierenberg, E., Låg, T., & Dahl, T. I. (2021). Knowing and doing: The development of information literacy measures to assess knowledge and practice. Journal of Information Literacy, 15(2), 78–123.
http://dx.doi.org/10.11645/15.2.2795

As you can see, I wrote the article together with Torstein Låg (co-supervisor) and Tove Dahl (main supervisor) – what a team! 🙂 The work behind it was extensive, both intellectually challenging and time-consuming. I’m quite proud of the finished product. It feels good to have gotten this far! Thank you Torstein and Tove!

The article begins by describing the development and use of three tools for assessing IL in students, which explain why the article is called “Knowing and doing”:

  1. a 21-item multiple-choice test, covering seeking, evaluating and using information sources (what they know)
  2. an annotated bibliography to assess students’ skills in evaluating information sources in an authentic, graded assignment (what they do)
  3. a rubric for assessing students’ use of sources in their academic writing, again using an authentic, graded assignment (what they do)

In addition to describing the comprehensive procedures used to develop these measures (including evaluating them for reliability and validity), we also discuss the results we obtained when utilizing them to measure IL in undergraduate and graduate students.

The article continues with a discussion of the association between IL knowledge and skills – is what students know about IL reflected in what they do in practice? Spoiler alert – it turns out that in some cases there is a significant correlation between the two, but the correlation is not strong. This means that there are other factors, in addition to students’ IL knowledge, that contribute to their skills.
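As a toy illustration of what “significant but not strong” means in practice (made-up numbers, not our actual data), a modest correlation can be highly significant while still leaving most of the variation in skills unexplained:

```python
# Fabricated example: a correlation can be statistically significant yet weak.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
knowledge = rng.normal(size=300)                     # hypothetical knowledge-test scores
skills = 0.3 * knowledge + rng.normal(size=300)      # skill scores only weakly related to them

r, p = stats.pearsonr(knowledge, skills)
print(f"r = {r:.2f}, p = {p:.1e}, r^2 = {r*r:.2f}")  # tiny p-value, but r^2 far below 1
```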

We then discuss the dimensionality of the IL construct. Is IL actually a coherent, unitary construct, or is it heterogeneous? (In other words, is information literacy actually one thing, or several things?) Spoiler alert 2 – our findings show that it is heterogeneous, composed of many facets. (Perhaps we should call it information literacies?) This finding has many important implications – read the article to learn more! 🙂
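For readers who like to see how such a dimensionality question can be approached, here is a rough sketch with fabricated data (not our instrument or our results). One common first step is to inspect the eigenvalues of the inter-item correlation matrix: several eigenvalues well above 1 (the Kaiser rule of thumb) suggest more than one underlying dimension.

```python
# Fabricated data: two unrelated facets, five items each.
import numpy as np

rng = np.random.default_rng(7)
n = 300
evaluating = rng.normal(size=(n, 1)) + rng.normal(scale=0.8, size=(n, 5))  # "evaluating" items
citing = rng.normal(size=(n, 1)) + rng.normal(scale=0.8, size=(n, 5))      # "citing" items
items = np.hstack([evaluating, citing])

corr = np.corrcoef(items, rowvar=False)      # 10 x 10 inter-item correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted from largest to smallest
print(np.round(eigenvalues, 2))               # two large eigenvalues: not a unitary construct
```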

Torstein and I presented the research behind this article 5 days ago at the Creating Knowledge conference, which we also helped to organize. My next blog post, which I hope will be posted very soon, will be about this wonderful conference, and what it was like to organize and host an international, digital conference with over 600 delegates.

 

Events in the US are proof of the importance of information literacy!

The violent storming of the US Capitol in Washington DC yesterday, during Congress’s certification of the Electoral College votes confirming Joe Biden and Kamala Harris as President and Vice President of the United States, shows why being information literate is important. Disinformation being spread in the US, particularly by the current president, incited mobs into a dangerous and disruptive insurrection, the likes of which have not been seen since 1814.

One of the main tenets of information literacy (IL) is that we should be critical of our sources of information, and use those that are reliable. But that’s difficult when the President of the United States (POTUS) – arguably the most powerful person in the world – spreads conspiracy theories and other disinformation about how the election “was stolen” from him. The internet provides multiple platforms for this disinformation to spread instantaneously, providing an echo chamber in which Trump supporters reinforce their beliefs.

POTUS’s followers get their information mainly from biased, conservative channels like Fox News and Breitbart News, from social media, from QAnon (which promotes fringe conspiracy theories), and from POTUS himself. Trump’s megaphones, Twitter and Facebook, have now locked his accounts for 12 hours to prevent the spread of his lies and his encouragement of the rioters. POTUS is being censored.

Trump calls media “thieves and crooks,” sowing distrust in reliable newspapers like the New York Times and the Washington Post, and encouraging his supporters to rely on him for their information – an obvious characteristic of authoritarianism. This is dangerous for a democracy, where citizens vote for their government representatives based on the information they read and hear.

This sad chapter in American history can thus be blamed on ignorance, caused by poor information literacy skills. Too many citizens have relied on biased sources of information. Perhaps, if people had consulted more reliable sources of information instead of believing blindly in a delusional president, the events of the past 12 hours could have been prevented.

First article accepted for publication!

The first article for my PhD has been accepted for publication in the Journal of Information Literacy! 🙂 I wrote the article, called “Knowing and doing: The development of information literacy measures to assess knowledge and practice,” together with Torstein and Tove. It’ll be published in the June 2021 issue.

You may remember my blog post from August 21 called “Peer review of my first (attempted) article,” in which I expressed how discouraging it was to receive a review that was several pages long and required major revisions to the article. At the time, I’d thought it was quite all right as it was. However, the article is much better now, after the revision. So although it was a lengthy process, it was well worth it.

The reviewers did a thorough job, and asked really good questions. We went through every comment and either revised the article accordingly or argued for why we didn’t agree that the change was necessary. As we worked, we wrote a detailed reply to the reviewers so they could easily find each revision and see our reasoning.

The reviewers wrote that the framing of the article is now much clearer, and that the paper as a whole is “more consistent and focused, resulting in a much stronger article overall.” They believe that with this article, we’ve made a significant contribution to the conversation about how we think of the information literacy construct.

🙂

 

 

Midway assessment for my PhD

Today I had my midway assessment, a milestone for every PhD student. 🙂

This is how UiT describes it: “The midway assessment shall provide the student and supervisor with an independent assessment – evaluating whether the student has adequate progression to complete the PhD education according to schedule. The student shall receive specific feedback on his/her work so far, and get suggestions for the further work. The midway assessment provides the department with an opportunity to discern students that need structured follow-up. It is expected that such an assessment will improve the progress of the project, and increase the likelihood that the student completes the course of study within prescribed time.”

I sent in several documents ahead of time, and presented my research today to a committee (one professor from UiT and one from the University of Oslo), and to my 3 supervisors. Because of the pandemic, everything was on Zoom.

After the presentation we discussed my research, and I received lots of useful feedback that will help with the rest of my project. The professor from UiO is an expert in quantitative methods, in the field of education/special education. She had several good arguments for why I should include qualitative methods in my research:

  • Information literacy is by nature also a qualitative field, and shouldn’t be explored only quantitatively (although quantitative work is also a useful contribution).
  • If I want to publish articles in more general, educational journals, with greater visibility and more impact than those in the information literacy niche, I should use other kinds of analyses, not just quantitative – I could use “mixed methods.”
  • If I only use quantitative methods, everything I write will be peer-reviewed by statisticians, and they can be very demanding, and perhaps concentrate more on the numbers than on the implications of the findings.
  • The analyses and statistics involved in doing a longitudinal study (which I’m in the process of doing) are extremely complex, and it can take years to master them.

She also encouraged me to compare students’ scores on the IL tests/measures to outcome measures such as grades and completion of college degrees. That would make my research more interesting and relevant.

This was good advice, and I really appreciate that she put so much time and effort into evaluating my work. 🙂 It was incredibly useful to get input from an external expert who was previously uninvolved in my research, and who could examine it through a new lens.

Of course it was hard for me to hear that I’m slightly off-track, but it’s better to hear it now than later in the game, I guess… Although it will be challenging to change direction at this point, it’s probably wise. (And after I’ve digested this newest input for a little longer than 3 hours, I’ll probably be even more convinced.) My study design has already gone through several revisions, so why not one more?

I’ve come to realize that doing a PhD means being constantly confronted with new intellectual challenges and continual revisions in plans. It often feels like my brain is doing somersaults, which somehow keep me on my feet. 😉

A big thank you to the two professors on my committee and to my three wonderful supervisors! 🙂 I feel privileged, humbled and grateful, once again.

Peer review of my first (attempted) article

 

Three months and two days after I submitted the first article for my PhD to a journal for publication, I finally got a response from the peer reviewer(s). It wasn’t exactly what I’d hoped for.

The editor first pointed out that the article (written together with 2 of my supervisors) was interesting, well written, and relevant to the journal’s scope, but then wrote about aspects that needed to be addressed. These ranged from the framing of the article to the methods and statistics. The reviewers’ comments were attached.

The editor then wrote, “If you are willing to revise the work along the suggested lines, we would be pleased to receive a resubmitted version for review.” I’m not sure what this actually means, though. Would the new version have to go through an entirely new peer review that could take another 3 months, and possibly be rejected again? Or would it be re-evaluated by the same reviewers, who would check only the changes they suggested? If it’s the former, should I consider sending it to another journal instead?

The list of changes the reviewers suggested, by the way, was several pages long! (As opposed to my last 2 articles, written before I started my PhD, which needed only very minor changes.) They were mostly good points though, if I’m honest with myself. Some are causing us to look again at the basic assumptions of our article – can we really call this an intervention study? Why are we actually using Cronbach’s alpha to measure internal consistency, when information literacy is a multidimensional construct? Should we dwell on the point that IL is not unidimensional, and on our evidence for this, which was one of the 3 major research questions in the article? If not, were all those factor analyses a waste of time?! Argh!! That was all I did (tried to do) last summer!
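For the curious, here is a minimal sketch (with invented data, not our test) of what lies behind the Cronbach’s alpha question above: alpha treats a test as a single scale, so if the items really fall into separate facets, alpha computed across all of them can be hard to interpret, and per-facet alphas may say more.

```python
# Invented data: two facets of four items each, with no relation between facets.
import numpy as np

def cronbach_alpha(items):
    """items: respondents in rows, item scores in columns."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(0)
n = 300
facet_a = rng.normal(size=(n, 1)) + rng.normal(scale=0.7, size=(n, 4))  # e.g. "seeking" items
facet_b = rng.normal(size=(n, 1)) + rng.normal(scale=0.7, size=(n, 4))  # e.g. "evaluating" items

print(cronbach_alpha(np.hstack([facet_a, facet_b])))     # whole test: lower, despite more items
print(cronbach_alpha(facet_a), cronbach_alpha(facet_b))  # per facet: higher
```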

I was quite discouraged, needless to say, especially after all the work that went into this article, and the long wait for the response. All that effort cannot have been in vain! But after talking to Torstein, I have hope that we can improve the article and resubmit it to the journal. He says that this is totally normal – standard procedure.

I felt certain that the article was publishable when I submitted it. We sure worked hard on it. But maybe we became blind to our own thought-patterns and written words?

Quantitative research takes a lot of time and effort, between creating and piloting the measurement instruments, gathering the data (in many stages, with many different samples, in our case), analyzing and visualizing the data, and then actually writing the article.

This makes me wonder whether the entire purpose of scientific endeavors should be to publish. Is that really the most important thing? After all, I have learned a lot doing this research, and I could disseminate the findings at conferences or in my blog…

However, my PhD is article-based. I have to publish (or have ready for publication) at least 3 articles, in addition to writing the summary (kappa). In October I’ll be halfway through my 4-year period of funding for this project. I’m trying not to worry about not having published anything yet. This first article can be revised and (hopefully) published, and the second article is underway. And I have great supervisors (have I perhaps said this before?) who aren’t worried.

And now, onto the revisions…

 

“Fake news” in the corona-era

What exactly is fake news? How is it different from the better-defined terms misinformation and disinformation?

Misinformation is information that is not true, but is believed to be true by the person who disseminates it.

Disinformation is also false information, but it differs from misinformation in that the person who disseminates it knows that it isn’t true. It is a deliberate lie, often with malicious intent.

(Hint: you can remember the difference by thinking of the word “diss.” Disinformation often attempts to diss someone.)

Fake news has no formally accepted definition – in fact its meaning has changed significantly over the past 4 years. Previously, the term fake news was occasionally used for misinformation, but mainly for disinformation. A famous disinformation example is the “Pizzagate” incident, an attempt to influence the results of the 2016 presidential election in which candidate Hillary Clinton was accused of leading a child-abuse ring based in a Washington, DC pizzeria.

The term gained popularity during this election, but changed character when Trump began describing everything that he didn’t like in the media as “fake news.” One of the first examples of this was when he called reports of low attendance at his inauguration “fake news,” despite factual evidence of the meager turnout.

This makes the term “fake news” confusing and unhelpful, as it was previously used mainly for false information (both mis- and dis-), but is now frequently used for true information that someone doesn’t like. We should therefore avoid using the term “fake news” completely, and instead use “misinformation” and “disinformation.”

So where on the “information disorder spectrum” (as UNESCO calls the range of information pollution) are the many lies being spread about the corona-virus? Much of this is misinformation about the virus’ origin, prevention or treatments, spread by people – even presidents – who believe it to be true. This false information is often partly based on true information that has been twisted or reworked, as opposed to being purely fabricated. Some examples of later-debunked misinformation:

  • Vitamin D can prevent the corona-virus (spread on social media in Thailand)
  • Africans are not susceptible to corona-virus (spread on WhatsApp in Nigeria)
  • Drinking cow urine can cure COVID-19 (spread by a politician in India)
  • 5G towers cause corona-virus (spread in a French blog)
  • Your faith and God will protect you from contracting corona-virus (spread by several religious groups to their followers)
  • Injecting disinfectants can effectively treat the virus (spread by you know who, on live TV)

Some of the false information we hear about the corona-virus, however, is created and disseminated with malicious intent. Some examples of disinformation and various related conspiracy theories:

  • The US is the source of the virus, and they’re using it as “hybrid warfare” against China and Iran (spread on Iranian TV)
  • North Korea and China conspired together to create the corona-virus (spread on Fox News in the US)
  • The virus is a biological weapon created by the CIA to destroy China’s economy (spread on social media in Russia)
  • The corona-virus came from an accidental leak at a Chinese biological weapon lab in Wuhan (spread in some American news sources)

The spread of both mis- and disinformation can obviously have serious consequences, including injury, death or international conflict. WHO has therefore created a webpage to provide factual health-related information to bust many of the circulating myths about the virus.

Spreading false information is easier than ever – just click SHARE on your favorite social media. Research has shown that false information, because it can be so unbelievable and scary, spreads much faster and deeper than true information.

So what can you do to prevent the spread of false information?

  • Think critically!
  • Vote.
  • Check facts before you share posts on social media, even if you think that the information might truly be helpful to your friends. (There are several fact-checking websites out there, such as www.factcheck.org and www.snopes.com.)
  • Be wary of anonymous sources.
  • Use trusted sources of information.
  • Also check the recommendations and advice provided on official government websites and by international organizations such as the WHO.
  • Tell your friends who spread dubious information to delete it.
  • And if you’re a college student, look at your library’s webpages for useful information about evaluating sources, and attend courses offered by your wonderful librarians! 🙂

This is a pandemic. It’s affecting the entire world. If we want to defeat it, we have to be smart. So why am I posting this on my blog about information literacy? Because thinking critically is a huge part of being information literate!

 

 

First article is (nearly) done!

My data

This poor blog has been neglected for quite a while, since I’ve been concentrating all my efforts on analyzing data and writing the first article for my dissertation. As opposed to a monograph dissertation, I’m doing a “compilation thesis,” which is a series of articles (at least three), together with a summary section (kappa).

The first article, with working title “Knowing and doing: The development and testing of information literacy measures,” has been an enormous effort, as it’s based on data from several different samples, collected at different times. I wrote it, for the most part, together with my advisor Torstein, who has provided excellent guidance throughout this process. Just the right combination of “here’s the answer” and “here’s how to do it yourself.” (Plus a good dose of neurons, logic, experience, and patience!)

If I’d written this article alone, it would’ve been done much sooner, but it would’ve been much worse. I’ve learned so much through this process, especially about how to structure an article based on empirical data, and the logic behind each section. It sounds so easy – Introduction, Methods, Results, Discussion – but it was actually quite difficult to separate these sections while preserving readability.

This article could’ve potentially been several, since each of its 3 main goals is nearly enough for an article in itself (especially the first):

  1. “to develop information literacy measures that are applicable across academic disciplines, and that are brief and easy to administer, but still likely to be reliable and to support valid interpretations”
  2. “to determine whether what students know about IL corresponds to what they actually do when finding, evaluating and using sources”
  3. “to help illuminate the question of whether IL should be conceived of as a coherent, unitary construct, or a set of disparate and more loosely related components”

Just look at 2 terms in the first goal: reliable and valid. I had no idea how important these concepts are when developing measurement instruments, how many analyses would have to be performed in order to “conclude” anything about reliability and validity, and how many words would be needed to describe these analyses.

We’ve had to economize with words, which, surprisingly, is quite difficult. The journal we’re aiming to publish in has a limit of 8000 words, and we’re currently at ca. 7900.

The research is churning in my head whether I’m sleeping or skiing. I’m proud of myself for being disciplined, concentrated, and persevering throughout the process of collecting and analyzing the data, and then writing the article. Nothing has come easily – I’ve worked hard for everything I’ve accomplished. Luckily, I haven’t had too many other things going on for the past months (social isolation suits me just fine these days!), and could immerse myself in my work without losing track along the way.

The next blog post will be about the importance of information literacy in the age of Covid-19. 🙂

My data doesn’t make sense

I’ve spent months collecting and analyzing data from students regarding their information literacy knowledge and skills. For one study, I’ve used a survey to measure their knowledge and two written assignments to measure their skills. The idea is to see if there’s a correlation between these levels, in other words – is what they know reflected in what they do? (multiple regression analysis)

There are all kinds of analyses to perform even before asking that question, however, including:

  • is the survey reliable? (using e.g. a split-half reliability test – see the rough sketch after this list)
  • do survey questions (items) form logical groups (factors)? (factor analysis)
  • are the tests valid? (lots of analyses)
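To make the first item in that list concrete, here is a rough sketch with made-up data (not my actual survey): a split-half reliability estimate with the Spearman-Brown correction.

```python
# Made-up data: each row is a student, each column a 0/1-scored survey item.
import numpy as np

rng = np.random.default_rng(1)
answers = (rng.random((150, 20)) < 0.6).astype(int)   # 150 students, 20 hypothetical items

odd_total = answers[:, 0::2].sum(axis=1)              # score on the odd-numbered items
even_total = answers[:, 1::2].sum(axis=1)             # score on the even-numbered items

r = np.corrcoef(odd_total, even_total)[0, 1]          # correlation between the two halves
split_half = (2 * r) / (1 + r)                        # Spearman-Brown corrected estimate
print(round(split_half, 2))                           # low here, because the answers are purely random
```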

So far, my results in this study are puzzling, to say the least. Correlations that I’d expected to see in my data do not exist. For example, there’s a negative correlation between the amount of higher education students have had and their levels of IL. Huh? The more education, the less they know??

As for reliability – whether my survey items produce accurate, reproducible, and consistent results – I sometimes get negative results! (See clip from SPSS below.) How is this possible, when – in my eyes – the survey questions (within their 3 categories) are related to each other?
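Setting the SPSS clip aside, here is a tiny made-up illustration of how a reliability estimate can actually come out negative: if the two halves of a test (or the items within a scale) move in opposite directions – which near-random answering can easily produce in a small sample – then the correlation they are built on is negative, and so is the coefficient.

```python
# Made-up scores illustrating a negative reliability estimate.
import numpy as np

half_1 = np.array([3, 5, 2, 4, 1, 5])   # hypothetical total scores on the odd items
half_2 = np.array([4, 1, 5, 2, 5, 1])   # scores on the even items, running the opposite way

r = np.corrcoef(half_1, half_2)[0, 1]
print(round(r, 2))                      # about -0.97: any reliability coefficient built on
                                        # this correlation (split-half, alpha) is negative too
```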

I’ve double-checked that my data is coded correctly, so that’s not the problem. It just doesn’t make any sense! It seems as though students have answered totally randomly on the survey. They may know the answer to one question about the critical evaluation of information, but not to the next, even though the questions are quite similar.

If I could just find ONE meaningful correlation or significant result in this study, I’d be satisfied, but so far I’ve found none. I’m not finished collecting data, of course, so perhaps something meaningful will magically appear in future results. But so far, I’m just perplexed, and yep – frustrated. Argh!

I’ll have to start thinking “outside of the box” in order to interpret these results. Maybe the holiday break will help my brain to reboot? It’s all extremely challenging, but at least I’m learning to do research…

Nagging questions like “Will I be able to publish these seemingly meaningless results?” and “Can I get a PhD even if my data doesn’t make sense?” will hopefully take a place on the back-burner for the time being. There are certain things that I simply can’t do anything about, so it’s best to not focus on them. I’ll just plow on, doing the best that I can.

(And for the astronomically-interested: in two days is the winter solstice. On this day, at its highest, the sun here in Tromsø will be ca. 5 degrees BELOW the horizon. Not even the highest clouds are touched by its light. There’s one more month of polar night.)